[
"stackoverflow",
"0005549714.txt"
] | Q:
Is using a unique datetime or timestamp for database records a bad idea?
I'm fairly new to the SQL world and am designing a database that will record real-time data at up to 32 Hz. It seems logical to use the timestamp as part of the primary key, or at least to make it unique (and perhaps composite with other information). My question is about performance as the size of the table increases. Naively, I'm thinking that if the database must check that the timestamp is unique every time I insert a record, things will get really slow after a while. But then again, this is probably something database optimizers solved years ago and it's perfectly efficient to define the timestamp as unique on large-scale databases.
Note I'm using MySQL and timestamp for my database. Also I don't actually need timestamp to be unique, it just makes me feel better knowing the schema is as tightly defined as possible.
A:
MySQL's date/time types cannot store fractions of a second. You would have to store the fraction in an extra column alongside the datetime or timestamp column, and the fraction would have to be supplied by the application.
Making the timestamp unique in a very large table would certainly make insertion slower. If you are capturing in real time, that could make the job nonviable at 32 Hz.
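If sub-second precision is needed, one option consistent with the above is to keep the whole-second timestamp plus a separate fraction column and key on both. A minimal sketch (table and column names are placeholders, not from the question):
CREATE TABLE readings (
    ts       TIMESTAMP NOT NULL,           -- whole-second part
    frac_ms  SMALLINT UNSIGNED NOT NULL,   -- 0..999, supplied by the application
    value    DOUBLE NOT NULL,
    PRIMARY KEY (ts, frac_ms)              -- uniqueness check piggybacks on the primary key index
);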
|
[
"stackoverflow",
"0045786976.txt"
] | Q:
Set minimal ts duration in ffmpeg command
We are using ffmpeg to convert mp4 video file to hls.
When the video is converted, it sometimes happens that the last .ts chunk is only about 0.03 s long, and the player stalls on this chunk for a while. Is there a special command in ffmpeg to set a minimal .ts duration? Or another way to avoid such short chunks?
In our command to set up ts duration we use: -segment_time 5
A:
One solution is to concatenate chunks: if the last .ts chunk is 0.03 s or shorter, concatenate it with the previous one. This avoids ending the playlist with a chunk of negligible duration.
This ffmpeg command can be used to concatenate chunks:
ffmpeg -i "concat:input1.mpg|input2.mpg|input3.mpg" -c copy output.mpg
More information here.
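To decide whether the last segment is short enough to merge, its duration can be checked first with ffprobe, for example (a sketch; the segment file name is a placeholder):
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 last_chunk.ts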
|
[
"physics.stackexchange",
"0000401751.txt"
] | Q:
Imagine a steel bar floating in space. Assuming the bar wouldn't break or bend, if I shoot a bullet at one end, would it rotate, fly away, or both?
Is there a formula to calculate both the translational and rotational velocity? Does the bar always bend, and if so, is there a formula for how it bends (maybe related to the velocity/force of the bullet)? What if the bar is very long, to the point where if the bar rotates it'd be at relativistic velocity?
What if I shoot at a string of cloth instead? Would the behavior of the bar/string be different if there's air drag?
Sorry for asking too many questions, but if you can answer any of them I'd be grateful.
A:
Imagine a steel bar floating in space.
With you so far.
Assuming the bar wouldn't break or bend, if I shoot a bullet at one end
What do you mean "one end"? Be specific. Precisely describe the bar and where the bullet hits it.
would it rotate, fly away, or both?
That depends entirely on where the bullet hits it. If you hit it in the middle of the end, then it simply moves. If you hit it on the side of the end, it rotates too, which can be worked out from conservation of linear and angular momentum.
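As a rough sketch of the formulas (an assumption on my part that a bullet of mass $m$ and speed $u$ embeds in a uniform bar of mass $M$ and length $L$, hitting at perpendicular distance $d$ from the bar's centre, with $m \ll M$), conservation of linear and angular momentum gives
$$v_{\text{cm}} \approx \frac{m\,u}{m+M}, \qquad \omega \approx \frac{m\,u\,d}{I}, \qquad I = \tfrac{1}{12} M L^2 .$$
So hitting the exact centre ($d = 0$) gives pure translation, and the farther off-centre the hit, the larger the spin for the same bullet.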
|
[
"blender.stackexchange",
"0000026845.txt"
] | Q:
Center of gravity for an object
How can I reset or place the center of gravity for an active rigid body object? I am having an issue where the center of gravity for an object is placed far too high for accurate physics.
A:
The center of gravity is placed at the object origin. To move the object origin, press Ctrl + Shift + Alt + C and choose one of the options:
Center of Mass will put the origin (center of gravity) at the center of mass, assuming a uniform density.
3D cursor will put the origin at the 3D cursor, allowing for more customized setups.
|
[
"stackoverflow",
"0008462856.txt"
] | Q:
how to get same password hash(md5()) as phpbb3
I have phpBB3 on my page and now I am also starting some advertising... So basically I want a form where I fill in a username and password, then I want the script to hash (md5) the password (the same way my phpBB3 does) and compare the password and username against the forum_users table... whatever I do, I just can't make it work...
<?php
define('IN_PHPBB', true);
include ("../Forum/common.php");
include ("../Forum/includes/functions.php");
$pass = "password";
$hash = phpbb_hash($pass);
echo $hash;
?>
It doesn't actually do anything.
A:
If your goal is to authenticate the username and password that your user is providing to you against what is in the database then this is all you should need:
<?php
/**
*
* Login script for phpBB using username/password
* Used for website authentication
*
*/
define('IN_PHPBB', true);
$phpbb_root_path = dirname(__FILE__) . '/./';
$phpEx = substr(strrchr(__FILE__, '.'), 1);
include("common.php");
// Start session management
$user->session_begin();
$auth->acl($user->data);
$user->setup();
$username = request_var('username', '');
$password = request_var('password', '');
if(isset($username) && isset($password))
{
$auth->login($username, $password, true);
}
?>
But if you still wish to figure out the PHPBB password encryption hash, it is no longer MD5 in version 3.0 or higher and is a custom hash. Take a look at this thread:
http://www.phpbb.com/community/viewtopic.php?f=71&t=585387
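For reference, a minimal sketch of checking a password against a stored phpBB3 hash (this assumes phpBB 3.0.x, where phpbb_check_hash() lives in includes/functions.php; the $storedHash value is a placeholder for the user_password field you fetch from the forum users table yourself):
<?php
define('IN_PHPBB', true);
include("../Forum/common.php");             // same includes as in the question
include("../Forum/includes/functions.php");
$password   = "password";
$storedHash = "...";                        // placeholder: hash read from the users table
// phpbb_check_hash() handles both the new custom hashes and legacy MD5 hashes
if (phpbb_check_hash($password, $storedHash)) {
    echo "Password matches the stored phpBB hash";
} else {
    echo "Wrong password";
}
?>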
I hope this helps.
Pete
|
[
"stackoverflow",
"0034619302.txt"
] | Q:
AspectJ keep context around async method calls
I'm new to AspectJ and I'm trying to figure out how to keep/track a context across multiple async method calls. Imagine the following code:
@TimerStart
public void doSomething() throws InterruptedException {
Thread.sleep(1000);
MyCallable callable = new MyCallable();
Future future = executorService.submit(callable );
}
private class MyCallable implements Callable {
@Override
public Object call() throws Exception {
someOtherMethod();
return null;
}
@TimerEnd
private void someOtherMethod() throws InterruptedException {
Thread.sleep(1000);
}
}
I'd like to measure the time passed between @TimerStart and @TimerEnd. I'm struggling with two problems right now:
How do I keep an object shared between two aspects? Fields in an aspect seem to all be static, so what about concurrency issues...?
How do I get two advices, one executed before @TimerStart and one after @TimerEnd?
Currently I have something along the lines of this:
public aspect TimerAspect {
pointcut timerStart(Object object, TimerStart timed):
execution(@TimerStart * *(..)) && this(object) && @annotation(timed);
pointcut timerStop(Object object, TimerEnd timed):
cflow(execution(@TimerEnd * *(..)) && this(object) && @annotation(timed) && !within(FlowTimerAspect));
before(Object object, TimerStart timed): timerStart(object, timed) {
System.out.println("##### Flow timer START");
}
after(Object object, TimerEnd timed): timerStop(object, timed) {
System.out.println("##### Flow timer STOP");
}
However the only thing I get right now is a StackOverflowException (yeah I know - that's why I'm asking here).
EDIT:
I stumbled upon percflow which seems to do the trick BUT only when the @TimerStart and @TimerEnd appear in the same thread. Suggestions are highly appreciated!!
public aspect TimerAspect percflow(timerStart(Object, TimerStart)) {
private long context;
pointcut timerStart(Object object, TimerStart timed):
execution(@TimerStart * *(..)) && this(object) && @annotation(timed);
pointcut timerStop(Object object, TimerEnd timed):
execution(@TimerEnd * *(..)) && this(object) && @annotation(timed);
before(Object object, TimerStart timed): timerStart(object, timed) {
context = System.currentTimeMillis();
}
after(Object object, TimerEnd timed): timerStop(object, timed) {
long passed = System.currentTimeMillis() - context;
System.out.println("passed time: " + passed);
}
}
A:
Since you're planning to switch threads while measuring, the percflow instantiation method is not going to help you. You'll have to stick with the default singleton aspect and keep the timing values for the object of interest in a WeakHashMap. That way, you keep timings as long as the objects/threads associated with the timing are alive.
We'll need another annotation to mark the event of associating a new object (a Callable in this example) with your timing. Let's call this @TimerJoin. The @TimerJoin annotation would be analogous to your existing @TimerStart and @TimerEnd annotations. Your measuring aspect will look like this.
import java.util.Map;
import java.util.WeakHashMap;
public aspect TimerAspect {
private final Map<Object, Timer> objectTiming = new WeakHashMap<>();
private final ThreadLocal<Timer> currentThreadTimer = new ThreadLocal<>();
pointcut timerStart(Object object):
execution(@TimerStart * *(..)) && this(object);
pointcut timerStop(Object object):
execution(@TimerEnd * *(..)) && this(object);
pointcut timerJoin(Object object):
(execution(@TimerJoin * *(..)) || execution(@TimerJoin *.new(..)) )
&& this(object);
before(Object object): timerStart(object) {
Timer timer = new Timer();
timer.start();
objectTiming.put(object, timer);
currentThreadTimer.set(timer);
System.out.println("##### Flow timer START");
}
before(Object object): timerJoin(object) {
Timer timing = currentThreadTimer.get();
objectTiming.put(object, timing);
System.out.println("##### Flow timer JOIN");
}
after(Object object): timerStop(object) {
Timer timing = objectTiming.get(object);
timing.stop();
System.out.println("##### Flow timer STOP");
System.out.println("Elapsed: " + timing.getElapsed());
}
}
And the simple Timer.java class:
public class Timer {
private long start;
private long stop;
public long getStart() {
return start;
}
public long getStop() {
return stop;
}
public void start() {
start = System.currentTimeMillis();
}
public void stop() {
stop = System.currentTimeMillis();
}
public long getElapsed() {
return stop-start;
}
}
Modify your callable to mark it to join the timer on the current thread:
private class MyCallable implements Callable {
@TimerJoin
public MyCallable() {
}
@Override
public Object call() throws Exception {
someOtherMethod();
return null;
}
@TimerEnd
private void someOtherMethod() throws InterruptedException {
Thread.sleep(1000);
}
}
The rest of your code will be the same.
You may notice that the aspect is using a ThreadLocal as a means of storage for the current timer to be able to associate it with new objects. You may choose another kind of storage for this, but for the sake of the example, I tried to keep it simple. Also, again for the sake of simplicity, I left out any safety checks for nulls in the aspect. You'll need to handle the corner cases yourself.
|
[
"stackoverflow",
"0035421048.txt"
] | Q:
React component can't find function
My application has a component that creates a navigation bar at the top of certain pages. I want to show the 'logout' button ONLY if the user is currently logged in (there's a token stored in localStorage).
When the code below is run, the browser gives me the following error:
ReferenceError: Can't find variable: showLogout
import React from 'react'
import NavHelper from './components/nav-helper'
export default React.createClass({
render () {
return(
<NavHelper>
<nav className='top-nav top-nav-light cf' role='navigation'>
<input id='menu-toggle' className='menu-toggle' type='checkbox'/>
<label htmlFor='menu-toggle'>Menu</label>
<ul className='list-unstyled list-inline cf'>
<li><a href="/home">Website</a></li>
<li><a href='/languages'>Languages</a></li>
<li><a href='/topics'>Topics</a></li>
//==========================
{window.localStorage.token ? showLogout() : null}
//==========================
<li className='pull-right'><a href='/saved'>Saved</a></li>
</ul>
</nav>
<div className='container'>
{this.props.children}
</div>
</NavHelper>
)
},
showLogout() {
return (<li className='pull-right'><a href='/logout'>Logout</a></li>)
}
})
A:
Since this is a class you should refer to interior functions like this: this.showLogout()
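Applied to the render method in the question, the highlighted line becomes:
{window.localStorage.token ? this.showLogout() : null}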
|
[
"tex.stackexchange",
"0000094810.txt"
] | Q:
eso-pic colorgrid overwriting pdfpages page
This may be really obvious, but I'm trying to use pdfpages to annotate a PDF "form". To help with filling it in, I'm trying to put a grid on the page.
The following code places a grid on the page, but it covers the PDF, so I can't see what I'm trying to fill in.
What am I doing wrong? (MWE with \includepdf ..somepdf replaced with any pdf file/page)
\documentclass[10pt]{article}
\usepackage{grffile}
\usepackage[colorgrid,texcoord,gridunit=pt]{eso-pic}
\usepackage{pdfpages}
\begin{document}
\includepdf[pages=1-1]{"/home/me/somepdf"}
\end{document}
A:
Use
\usepackage[grid,
gridcolor=red!20,
subgridcolor=green!20,
%texcoord,
gridunit=pt]{eso-pic}
Code:
\documentclass[10pt]{article}
\usepackage{grffile}
\usepackage[grid,
gridcolor=red!20,
subgridcolor=green!20,
%texcoord,
gridunit=pt]{eso-pic}
\usepackage{pdfpages}
\begin{document}
\includepdf[pages=-]{snifs_fov}
\end{document}
|
[
"math.stackexchange",
"0001338009.txt"
] | Q:
What's the name of this type of a set?
So I have a set $\{i_1,i_3,i_5\}$. What do we call the following set? Is there a standard name for it?
$\emptyset, \{i_1\}, \{i_1,i_3\}, \{i_1,i_3,i_5\}$.
Note that we do not have $\{i_3,i_5\}$ in it so it is not a power set. It seems like it is a "naturally ordered poset" to me.
A:
It seems like you're really starting with an ordered set $(S,<)$. That is, if you just started with an arbitrary set with three elements $\{x,5,*\}$, there would be nothing to distinguish $\{x,*\}$ from $\{x,5\}$. It's important that you know $i_1$ comes first, followed by $i_3$, then $i_5$, right?
Then what you've written down is the set of all downwards-closed subsets of $S$. In general, given $(S,<)$, you can form $\{X\in\mathcal{P}(S)\mid \text{if }a\in X\text{ and }b<a\text{ then }b\in X\}$.
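As a quick check of that definition against the example: with the order $i_1 < i_3 < i_5$, the downward-closed subsets are exactly
$$\emptyset,\quad \{i_1\},\quad \{i_1,i_3\},\quad \{i_1,i_3,i_5\},$$
while $\{i_3,i_5\}$ is excluded because $i_1 < i_3$ but $i_1 \notin \{i_3,i_5\}$.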
I'm extrapolating here, since you haven't made your intentions really clear. Does this capture what you had in mind?
|
[
"math.stackexchange",
"0000199539.txt"
] | Q:
Why is $\zeta ^0 = 1$ here under this isomorphism?
For the isomorphism of $U_8$ (where $U_8 = \{ z \in \mathbb{C} | z^8 = 1 \}$) with $\mathbb{Z}/8\mathbb Z$ in which $\zeta =e^{i2\pi/8} \mapsto 5$ and $\zeta \cdot \zeta = 5 +_8 5 =2$
Why is $\zeta^0 = 1$?
I cannot figure out how we get 0 in this.
EDIT: Directly quoting the problem
It can be shown that there is an isomorphism of $U_8$ with $\mathbb{Z}_8$ in which $\zeta = e^{i\pi/4} \leftrightarrow 5$ and $\zeta^2 = 2$. For $m=0$, we have $\zeta^0 = 1$, and for $m = 3$ we have $\zeta^2 \cdot \zeta = 2 +_8 5 = 7$, and similarly $\zeta^4 = \zeta^2 \zeta^2 = 2 +_8 2 = 4$
ADDED QUESTION
Why is $2+_8 5= 7$? Why isn't it $2 + 5 - 8 = -1 $?
A:
In a homomorphism $f:G\to H$ of groups you always have $f(x^n)=f(x)^n$ for all $x\in G$, $n\in \mathbb Z$.
Since here $H=\mathbb Z/8\mathbb Z$ is written additively, the power is in fact multiplication, i.e. $f(x^n)=n\cdot f(x)$ for $x\in G$, $n\in \mathbb Z$.
Especially, $f(\zeta^0)=0\cdot f(\zeta)=0$.
A:
For your additional question: $(2 + 5) \bmod 8 = 7$, and $7$ and $-1$ lie in the same congruence class modulo $8$ (since $7 \equiv -1 \pmod 8$), so both describe the same element of $\mathbb Z/8\mathbb Z$; $7$ is simply the standard representative chosen from $\{0,1,\dots,7\}$.
|
[
"meta.stackexchange",
"0000149399.txt"
] | Q:
Can you report someone for 'stealing' an answer?
I posted an answer recently; maybe a minute later, another person posted the same answer. I find this fine, except that he didn't include some things that I did.
So he edited his answer to mirror mine: code that I didn't include, because I thought it was self-explanatory, was added.
Is this flag-able? Should it be? Here is the post: How to convert from Shapes to string, int, etc...?
I understand that the particular question I was asking about IS basic. That it will draw similar conclusions, that's why I figured it wasn't the best example. I hope that explains myself a little bit for the downvotes.
A:
There are two questions here:
Should answers that don't add anything new be flagged?
Yes. Maybe. It depends a bit on the circumstances.
There are a lot of questions that are pretty basic and will inevitably attract multiples of essentially the same answer posted almost at the same time as different people read the question and realise they can post a solution. This is largely fine.
If someone comes along a year later and copy/pastes the same answer again, that answer should be flagged and possibly removed. Or at least a comment should be left for the answerer asking them to knock it off.
Did Andrew Cooper steal your answer?
I think you're making some big assumptions there.
There is a difference of 30 seconds between your answer and Andrew's. Both your answers addressed the same basic problem with the code sample in the question. If I were reading that question 45 minutes ago, I probably would've posted something virtually identical in response too.
To me this seems to fit the first category above. (And in the end it turned out that the asker simply forgot a method name.)
The best approach in situations like this, IMHO, is to make your answer stand out in any way you can - more (relevant!) information, clearer explanations, and so on.
|
[
"stackoverflow",
"0048250350.txt"
] | Q:
How to Guess schema in Mysqlinput on the fly in Talend
I've built a job that copies data from MySQL table a to MySQL table b.
The table columns are the same, except that sometimes a new column can be added to table a.
I want to copy all the columns from a to b, but only those that exist in table b. I was able to put a specific column list in the select statement, naming the columns that exist in table b, like:
select column1, column2, column3... from table a
The issue is that if I add a new column to b that matches a, the Talend job schema in the MysqlInput component has to be changed as well, because I work with built-in types.
Is there a way to force the schema columns while the job is running?
A:
If you are using a subscription version of Talend, you can use the dynamic column type. You can define a single column for your input of type "Dynamic" and map it to a column of the same type in your output component. This will dynamically get columns from table a and map them to the same columns in table b. Here's an example.
If you are using Talend Open Studio, things get a little trickier as Talend expects a list of columns for the input and output components that need to be defined at design time.
Here's a solution I put together to work around this limitation.
The idea is to list all table a's columns that are present in table b. Then convert it to a comma separated list of columns, in my example id,Theme,name and store it in a global variable COLUMN_LIST. A second output of the tMap builds the same list of columns, but this time putting single quotes between columns (so as they can be used as parameters to the CONCAT function later), then add single quotes to the beginning and end, like so: "'", id,"','",Theme,"','",name,"'" and store it in a global variable CONCAT_LIST.
On the next subjob, I query table a using the CONCAT function, giving it the list of columns to be concatenated CONCAT_LIST, thus retrieving each record in a single column like so 'value1', 'value2',..etc
Then at last I execute an INSERT query against table b, by specifying the list of columns given by the global variable COLUMN_LIST, and the values to be inserted as a single string resulting from the CONCAT function (row6.values).
This solution is generic, if you replace your table names by context variables, you can use it to copy data from any MySQL table to another table.
|
[
"stackoverflow",
"0049752744.txt"
] | Q:
How to set auth token programmatically in init.groovy.d for a job to trigger builds remotely
I am able to use Jenkins.instance.getJob('job-name').getAuthToken() in order to get the auth token that is already saved for the job. But I didn't find any setter function to set that value, nor do I know how to programmatically enable the Trigger builds remotely (e.g., from scripts) option. The code online is erratic at best (for me, that is). Any help would be much appreciated. Thanks.
A:
I had tried editing the config file of the job using this answer. Then I found out I could do the following:
AbstractItem it = (AbstractItem)Jenkins.getInstance().getItem('url-trigger-test')
if(it.authToken instanceof hudson.model.BuildAuthorizationToken) {
println(it.authToken.getToken())
it.authToken = new hudson.model.BuildAuthorizationToken('anotherToken')
it.save()
}
Hope this helps someone. Thanks.
|
[
"stackoverflow",
"0016645779.txt"
] | Q:
REST questions: advantages of REST over XML-RPC
What is the advantage of using the REST verbs GET, POST, PUT, and DELETE, instead of just using POST and embedding an XML-RPC operation description in the POST body describing what we want to insert, update, or delete?
Plus, what if we wanted to do more than one of these operations in one go? Wouldn't the REST design be useless in this case, as REST does not seem to support transactions or multiple operations in one go?
Thanks.
A:
Below are my thoughts on why I prefer REST services over others.
REST uses the HTTP verbs GET, POST, PUT, DELETE to convey the intention of the service.
A majority of the HTTP framework is used as is.
In my opinion, there is no need to circumvent the HTTP to build a new protocol. Only an understanding of HTTP is required to build and use RESTful services. I think a carefully designed service acting on any resource will fit into the RESTful model.
In a RESTful service, the operation on the resource is intuitive. So, when a GET operation is made on a resource, it is imperative that the operation is idempotent.
REST enables lighter payloads. It is easy and lightweight to make an Ajax call to a RESTful service.
It provides the ability to serve different request/response formats, like JSON and XML, from the same service.
Even though there is no WS-style transaction support in REST, a RESTful service can be built to achieve the same consistent state.
There can be cases where REST is not appropriate, that depends on the architecture and the contract required for the solution.
|
[
"stackoverflow",
"0041680339.txt"
] | Q:
Xamarin Android deep linking not working
I am using Xamarin to develop an Android application. I want to be able to open the app when the user opens the link example://gizmos, so I add this to my manifest file:
<activity android:name="mynamespace.MyActivity"
android:label="@string/application_name" >
<intent-filter android:label="@string/application_name">
<action android:name="android.intent.action.VIEW" />
<category android:name="android.intent.category.DEFAULT" />
<category android:name="android.intent.category.BROWSABLE" />
<!-- Accepts URIs that begin with "http://www.example.com/gizmos” -->
<data android:scheme="http"
android:host="www.example.com"
android:pathPrefix="/gizmos" />
<!-- note that the leading "/" is required for pathPrefix-->
<!-- Accepts URIs that begin with "example://gizmos” -->
<data android:scheme="example"
android:host="gizmos" />
</intent-filter>
</activity>
This is taken directly from the Android documentation. I try to click on the link example://gizmos from the mail application on my physical Android device, but I get the message: Unable to find application to perform this action
EDIT
It is not the same as the suggested duplicate, they are not using Xamarin.
A:
In Xamarin.Android, the activity configuration is set in attributes on the activity class.
For example:
namespace XamarinAndroidDeepLink
{
[Activity(Label = "XamarinAndroidDeepLink", MainLauncher = true, Icon = "@drawable/icon")]
[IntentFilter(new[] { Android.Content.Intent.ActionView },
DataScheme = "wori",
DataHost = "example.com",
DataPathPrefix ="/",
Categories = new[] { Android.Content.Intent.CategoryDefault,Android.Content.Intent.CategoryBrowsable })]
public class MainActivity : Activity
{
protected override void OnCreate(Bundle bundle)
{
base.OnCreate(bundle);
// Set our view from the "main" layout resource
SetContentView (Resource.Layout.Main);
}
}
}
And you do not need to set the intent filter in the manifest; the C# attributes will generate the configuration in the manifest for you.
Test the deep link with adb:
adb shell am start -W -a android.intent.action.VIEW -d "wori://example.com/?id=1234" XamarinAndroidDeepLink.XamarinAndroidDeepLink
You will find that your app starts.
Some browsers cannot handle the custom URL scheme. They will add http:// in front of your custom URL, and when you type the URL in the address bar they will use the search engine instead.
I suggest you create your own HTML page and download Google Chrome to open it:
Note: do not open the HTML page with the built-in HTML viewer.
<html>
<head>
<title>Product 12345</title>
</head>
<body>
<a href="wori://example.com/?id=1234">lalala</a>
</body>
</html>
Download Google Chrome and open your link :
|
[
"stackoverflow",
"0038611797.txt"
] | Q:
Unauthorized Operation When Querying Event Logs
I have the following code for querying some events on a remote computer:
filter = $"*[System[(EventID='5061' or EventID='5058') and TimeCreated[timediff(@SystemTime) <= {Timespan}]]]";
EventLogSession session;
using (var pw = GetPassword())
{
session = new EventLogSession(
"PCNAME",
"DOMAIN",
"USERNAME",
pw,
SessionAuthentication.Default);
}
var query = new EventLogQuery("Security", PathType.LogName, filter)
{ Session = session };
var reader = new EventLogReader(query);
When we reach the last line, new EventLogReader(query) throws an error:
Attempted to perform an unauthorized operation.
Where user USERNAME is a member of the Event Log Readers group on AD in the same domain. Is there some other group that he needs to be a member of? Or is there some way of configuring the Event Log Readers group to allow certain types of access?
A:
This was happening because the user specified in EventLogSession did not have local admin rights on the PC being queried.
After adding "USER" as a local admin on "PCNAME", I was able to query the logs successfully.
I thought this had already been set up, but because "USER" was added as an admin to all PCs via a script, the list of computers that it applied to must have been incomplete due to a bug in that script.
|
[
"stackoverflow",
"0003014740.txt"
] | Q:
JavaScript split function
I would like to split a string on the "," character using JavaScript.
example
var mystring="1=name1,2=name2,3=name3";
need output like this
1=name1
2=name2
3=name3
A:
var list = mystring.split(',');
Now you have an array with ['1=name1', '2=name2', '3=name3']
If you then want to output it all separated by spaces you can do:
var spaces = list.join("\n");
Of course, if that's really the ultimate goal, you could also just replace commas with spaces:
var spaces = mystring.replace(/,/g, "\n");
(Edit: Your original post didn't have your intended output in a code block, so I thought you were after spaces. Fortunately, the same techniques work to get multiple lines.)
A:
Just use string.split() like this:
var mystring="1=name1,2=name2,3=name3";
var arr = mystring.split(','); //array of ["1=name1", "2=name2", "3=name3"]
If you the want string version of result (unclear from your question), call .join() like this:
var newstring = arr.join(' '); //(though replace would do it this example)
Or loop though, etc:
for(var i = 0; i < arr.length; i++) {
alert(arr[i]);
}
You can play with it a bit here
|
[
"stackoverflow",
"0059891417.txt"
] | Q:
fseek(): stream does not support seeking
I read a .txt file and want to get its last line; the function that I used is shown below.
function read_last_line (){
$line = '';
$f = fopen('localpath\data.txt', 'r');
$cursor = -1;
fseek($f, $cursor, SEEK_END);
$char = fgetc($f);
/**
* Trim trailing newline chars of the file
*/
while ($char === "\n" || $char === "\r") {
fseek($f, $cursor--, SEEK_END);
$char = fgetc($f);
}
/**
* Read until the start of the file or first newline char
*/
while ($char !== false && $char !== "\n" && $char !== "\r") {
/**
* Prepend the new char
*/
$line = $char . $line;
fseek($f, $cursor--, SEEK_END);
$char = fgetc($f);
}
echo $line;
}
While executing the above code, I get fseek(): stream does not support seeking. I need a solution to resolve the issue.
A:
Not all streams support seeking, so fseek() will fail on a stream that does not support it.
For more detail, check the documentation:
https://www.php.net/manual/en/function.fseek.php
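If the goal is just the last line and the file fits comfortably in memory, a seek-free sketch (reusing the path from the question) would be:
function read_last_line() {
    // file() reads the whole file into an array of lines; no seeking involved
    $lines = file('localpath\data.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    if ($lines === false || count($lines) === 0) {
        return '';
    }
    return end($lines);
}
echo read_last_line();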
|
[
"photo.stackexchange",
"0000065035.txt"
] | Q:
Canon 50mm Macro versus 100mm Macro Lens
I'm considering purchasing either the Canon 50mm Macro or the 100mm Macro with IS.
From what I've researched the only real difference is how close you have to be to your object to take the pictures.
I do get that the smaller lens doesn't have IS, and that could be important depending on the object you are photographing.
As a novice/intermediate photographer, is it worth spending nearly $500.00 more for that extra 50 mm?
If it matters, I am using the Canon T5i.
A:
The 50mm f/2.5 Compact Macro lens is not a "true" macro lens. It does not magnify 1:1 (i.e., 1:1 means that the size of the image on the sensor is the same as the actual size of the object); it only magnifies 1:2.5. So, it doesn't let you get as close as a true macro lens like the EF 100mm f/2.8L IS USM Macro.
In addition, the design's a lot older. It was introduced in 1987, while the 100L Macro is a much newer digital-era design from 2009. The 100L is also an L lens (Canon's "luxury" line of pro lenses which are typically considered their best (but most expensive) offerings), with a UD (ultra-low dispersion) element, and its "Hybrid" IS unit is also relatively special in that it can correct for two types of shake (shift and angular), rather than just one (angular), like most of Canon's IS lenses do. Whether it's "worth" the additional cost is up to you, and how much handholding you plan on doing.
"True" macro Canon alternatives you could consider would be the older non-L EF 100mm f/2.8 USM Macro (note there's no IS), or the EF-S 60mm f/2.8 USM Macro lens (which can only be used on crop). These both cost more than the 50mm compact macro, but far less than the 100L Macro lens. Depending on what you plan to shoot, the longer lenses may be more useful by allowing you to use a larger working distance and not scare off critters that can hop/fly/crawl away when they sense you looming right over them; although, of course, with wildlife of any kind, field craft will be more important than the length of your glass. Shorter lenses and small working distances, however, are less of an issue with flower or product shooting.
I once shot a pretty bug with a midnight blue body and bright orange wings with my EF-S 60mm f/2.8 USM Macro on an XT/350D. It was only later that I realized I'd shot a tarantula hawk--a very large hornet with a very painful sting--from a distance of roughly six inches. If I do it again, it'll be with my EF 400mm f/5.6L USM on extension tubes. Maybe. If I don't run screaming first. :D
|
[
"stackoverflow",
"0006472564.txt"
] | Q:
dblclick event in Firefox extension
I'm using the following code in a Firefox extension; it should alert when a double-click event occurs, but when I double-click, nothing happens.
var Test = {
x: function(e) {
alert(e.target.defaultView.location.href);
}
}
window.addEventListener("dblclick", function(e) { Test.x(); }. false);
A:
Try changing the dot to a comma before the last parameter:
window.addEventListener("dblclick", function(e) { Test.x(); }, false);
// .^.
// | here...
Update
Your closure also expects a parameter e to be passed to it:
window.addEventListener("dblclick", function(e) { Test.x(e); }, false);
|
[
"stackoverflow",
"0019960866.txt"
] | Q:
How to store rspec test messages
Following is my sample code for testing:
require 'spec_helper'
describe Book do
before :each do
@book = Book.new 'Title', 'Author', :category
end
describe "#title" do
it 'returns the correct title' do
@book.title.should == 'Title'
end
end
describe "#author" do
it 'returns the correct author' do
@book.author.should == 'Author'
end
end
end
Here, we've two tests:
Book #title returns the correct title
Book #author returns the correct author
These above messages are displayed only when tests fail.
I have to save those two test messages and their respective results in a log file. To do that, I'll first have to store those test messages in objects. How can I store those test messages in an object, so that I can use them when writing to the log file?
A:
You can use the documentation format when you run RSpec: rspec --format doc which will output all of the results. If you want, you can redirect that to a file.
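For example (a sketch; the log file name is arbitrary):
rspec --format doc > rspec_results.log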
|
[
"math.stackexchange",
"0000145228.txt"
] | Q:
Formula for the number of latin squares of size $n$?
Is there a "easy to compute" formula for the number of Latin Squares or the number of reduced Latin squares of size $n$?
A:
When it comes down to it, there has only been one important step towards an "easy to compute" formula for $L_n$, the number of Latin squares of order $n$. This was by Sade:
A. A. Sade, Énumération des carrés latins, application au 7e ordre, conjecture pour les orders supérieurs, privately published, (1948). 8 pp.
Sade gave a method for non-constructive counting of Latin squares. More specifically, he clumped together Latin rectangles that have the same number of completions (as recognised by their "template"). Subsequent authors have merely implemented Sade's method on a computer...
For order 10:
B. D. McKay and E. Rogoyski, Latin squares of order ten, Electron. J. Combin.,
2 (1995). N3, 4 pp.
For order 11:
B. D. McKay and I. M. Wanless, On the number of Latin squares, Ann. Comb.,
9 (2005), pp. 335–344.
I discuss this in my survey paper and PhD thesis (both are open access).
I have often thought about performing the computation for $12 \times 12$ Latin squares, which would be a particularly ambitious project. The difference between $11 \times 11$ and $12 \times 12$ is huge. Here are some quotes which highlight the difficulty (here $R_n=n!(n-1)!L_n$):
McKay, Wanless (2005) write:
It is unlikely that $R_{12}$ will be computable by the same method for some time, since the number of regular bipartite graphs of order 24 and degree 6 is more than $10^{11}$.
Stones, Wanless (2012) (link) write:
McKay and Rogoyski and Kuznetsov gave $R_{12} \approx 1.62 \cdot 10^{44} > 3 \cdot 10^{10} \cdot R_{11}$... Barring the discovery of a faster algorithm, the evaluation of $R_{12}$ will remain infeasible for some years yet.
To me, it doesn't seem completely out-of-the-question to find $R_{12}$ within the next 30 or so years. It would require the use of some kind of supercomputer, and (presumably) a hybrid GPU-CPU implementation. However, the cost and effort required to do this would probably far exceed the benefit of finding $R_{12}$.
Estimates for the number of Latin squares of order $n$ are around, however, most recently by:
C. Zhang and J. Ma, Counting solutions for the N-queens and Latin square problems by Monte Carlo simulations, Phys. Rev. E, 79 (2009). 016703.
Other comments:
There are several known bounds and congruences for $L_n$.
There are many formulae for $L_n$ (which are impractical to use, but theoretically interesting).
Counting $k \times n$ Latin rectangles, for fixed $k$, is theoretically "easy", since these number satisfy linear recurrences, as proved in:
I. M. Gessel, Counting Latin rectangles, Bull. Amer. Math. Soc. 16 (1) (1987) 79–83.
(see also Doyle's formula, which is currently the best way to count $k \times n$ Latin rectangle for $n \leq 6$.)
I don't think it's possible to give a proper answer to the question: why is it hard to count Latin squares? It might be easy, and we haven't been clever enough yet. But the general idea as to why I think it is hard, is that, when constructing Latin squares, early decisions can affect the number of completions (and thus need to be accounted for in an enumeration). Compare to row-Latin squares ($n \times n$ matrices in which each symbol appears in every row, and we allow duplicates in columns). The number here is $n!^n$. Here, early decisions don't affect the number of completions.
A:
No: if there were, then OEIS A002860 and A000315 would list a formula; instead they label the calculation as hard.
The numbers have been calculated up to $n=11$ in McKay, B. D. and Wanless, I. M., "On the Number of Latin Squares," Ann. Combin. 9, 335-344, 2005, which you can read (with subscription) at Springer or (without) at arXiv.
|
[
"stackoverflow",
"0060929174.txt"
] | Q:
Angular ng-if hiding the elements without any condition on them
I have just begun working in Angular after months of working with VueJS. I am facing a confusing problem here.
<div class="form-group">
<label class="col-sm-3 control-label">ABS</label>
<div class="col-sm-6">
<input type="text" ng-model="absTarget" class="form-control" placeholder="Set Target" />
</div>
</div>
<div class="form-group" ng-if="SelectedTask.location">
<label class="col-sm-3 control-label">Location</label>
<div ng-dropdown-multiselect="" options="example10data" selected-model="absLocation" checkboxes="true" extra-settings="setting1" />
</div>
<br />
<div class="form-group">
<label class="col-sm-3 control-label">Seven Star</label>
<div class="col-sm-6">
<input type="text" ng-model="ssTarget" class="form-control" placeholder="Set Target" />
</div>
</div>
<div class="form-group" ng-if="SelectedTask.location">
<label class="col-sm-3 control-label">Location</label>
<div ng-dropdown-multiselect="" options="example10data" selected-model="ssLocation" checkboxes="true" extra-settings="setting1" />
</div>
<div class="form-group">
<label class="col-sm-3 control-label">DEN</label>
<div class="col-sm-6">
<input type="text" ng-model="denTarget" class="form-control" placeholder="Set Target" />
</div>
</div>
<div class="form-group" ng-if="SelectedTask.location">
<label class="col-sm-3 control-label">Location</label>
<div ng-dropdown-multiselect="" options="example10data" selected-model="denLocation" checkboxes="true" extra-settings="setting1" />
</div>
This is my selected task object :
{"name":"SMS report","location":false}
So basically I have to show the 2nd, 4th, and 6th elements only when SelectedTask.location is true.
The problem is that whenever SelectedTask.location equals false, only the 1st div element stays visible and all the rest become hidden.
A:
A self-closing <div> tag may not be supported by your parser. See here.
So the inner div of the first element isn't closed where you think it is, which leaves the first element's outer div applied to all of the following elements. A quick solution is to manually close all the inner divs:
<div class="form-group" ng-if="SelectedTask.location">
<label class="col-sm-3 control-label">Location</label>
<div ng-dropdown-multiselect="" options="example10data" selected-model="absLocation" checkboxes="true" extra-settings="setting1"></div>
</div>
<br />
|
[
"stackoverflow",
"0051969240.txt"
] | Q:
Script to create multiple GCE VMs simultaneously
I have a basic SH script that I use to create multiple VMs on GCP, and it works fine, but it runs sequentially. When the number of VMs goes above, say, 4 or 5, the delay becomes significant. I noticed that in platforms like Dataflow or Dataproc, an arbitrary number of VMs gets created virtually simultaneously. Is there a way to mimic that functionality in GCE? (After all, these seem to be basic GCE machines anyway.)
Right now I use the following (simplified) script:
vms=4
for i in `seq 1 $vms`
do
gcloud compute --project PROJECT disks create VM"$i" --size 50 --zone ZONE --type "pd-ssd"
gcloud beta compute --project=PROJECT instances create VM"$i" --zone=ZONE --machine-type=MACHINE_TYPE --subnet=default --maintenance-policy=MIGRATE --scopes=https://www.googleapis.com/auth/cloud-platform --disk=name=VM"$i",device-name=VM"$i",mode=rw,boot=yes,auto-delete=yes
done
Thanks for any suggestions!
A:
You can create multiple similar VMs faster by creating a managed instance group.
First, create an instance template, specifying the VM configuration that you need:
gcloud compute instance-templates create TEMPLATE_NAME \
--machine-type MACHINE_TYPE \
--image-project IMAGE_PROJECT \ # project where your boot disk image is stored
--image IMAGE \ # boot disk image name
--boot-disk-type pd-ssd \
--boot-disk-size 50GB \
--boot-disk-auto-delete \
--boot-disk-device-name DEVICE_NAME \ # boot disk device name, the same for all VMs
--subnet default \
--maintenance-policy MIGRATE \
[...]
Note:
You specify boot disk as part of instance template.
No need to specify zone for instance template. You will specify desired zone at instance group creation time.
Device name is the same for boot disks of all VMs in the group. This is not a conflict because device name of a particular disk is seen from guest OS of each specific VM and is local to that VM.
Other parameters are the same as those for creating a VM.
Then, create a group of 4 (or 100, or 1000+) VMs, based on this template:
gcloud compute instance-groups managed create GROUP_NAME \
--zone ZONE \
--template TEMPLATE_NAME \ # name of the instance template that you have just created
--size 4 # number of VMs that you need to create
The group creates multiple similar VMs, based on your template, much faster than you would do it by iterating creation of standalone VMs.
|
[
"stackoverflow",
"0053945371.txt"
] | Q:
python flask before_first_request_funcs
I want to make use of @app.before_first_request_funcs to run a couple of functions as recurring tasks before the first request to my app.
Can anyone please give me an example usage of @app.before_first_request_funcs?
from flask import Flask
import threading
import time
app = Flask(__name__)
def activate_job():
def run_job():
while True:
print("recurring task")
time.sleep(3)
thread = threading.Thread(target=run_job())
thread.start()
def activate_job2():
def run_job2():
while True:
print("recurring task2")
time.sleep(3)
thread = threading.Thread(target=run_job2())
thread.start()
@app.after_first_request(activate_job())
@app.before_first_request(activate_job2())
@app.route('/')
def home():
return {"action" : "This has done something"}
if __name__ == '__main__':
print(app.before_first_request_funcs)
app.run()
A:
As per the documentation, you should use @app.before_first_request to do what you want.
from flask import Flask
app = Flask(__name__)
def some_func(some_arg):
print('coucou')
# @app.before_first_request(some_func)
@app.route('/')
def home():
return {"action" : "This has done something"}
if __name__ == '__main__':
print(app.before_first_request_funcs)
app.run()
You can see the behavior of before_first_request_funcs (which is a list attribute, not a decorator) by commenting and uncommenting the before_first_request decorator.
If it is commented out, it will print an empty list; if you uncomment the line, it will print a list with one element, the some_func function object (for me, it was [<function some_func at 0x0000021393A0AD90>]).
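For completeness, here is a minimal sketch of the decorator form applied to the recurring task from the question (it assumes a Flask version that still provides before_first_request; the daemon=True flag is my addition so the background thread does not block shutdown):
from flask import Flask
import threading
import time

app = Flask(__name__)

@app.before_first_request
def activate_job():
    def run_job():
        while True:
            print("recurring task")
            time.sleep(3)
    # pass the function itself, do not call it
    threading.Thread(target=run_job, daemon=True).start()

@app.route('/')
def home():
    return {"action": "This has done something"}

if __name__ == '__main__':
    print(app.before_first_request_funcs)  # now contains activate_job
    app.run()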
|
[
"stackoverflow",
"0027231640.txt"
] | Q:
Force binding update when doing something on a non-focusable control
I have a problem which I've solved in a non-elegant way, and was wondering if there's any better solution.
I have a View which may have textboxes that only update their binding when losing focus (their bound properties use UpdateSourceTrigger=LostFocus). This is "almost" correct... I could set the binding's UpdateSourceTrigger to PropertyChanged and I wouldn't have a problem and everything would work as expected... however, there is some potentially computationally expensive stuff happening on the updating of those bound properties (involving deep checkings on the edited object which could potentially get long) so I actually only want to update the binding after I'm done with the editing.
This is posing problem with toolbars, since their buttons are not focusable, so clicking on them (and issuing the command) doesn't actually make the textbox lose focus so when the command is executed, the binding hasn't updated (think of a entity editing view, with a toolbar 'Save' button, that when clicked calls a save command which actually saves the entity. In this case, the entity would be saved with the value of the textbox before it lost the focus)
I could check the bindings before raising the command and update the source (this is what I'm doing now), however that means, either:
Having access to the bindings (or the controls) where the command is executing. This is discarded for the complete unelegance of the solution. The command action is defined on some other library which should be WPF-agnostic.
Executing the command on a code-behind event handler and perform the binding update (or just set the focus to something else and let WPF update the source) before raising the command. This is what I'm doing now and it's what I don't like (I'd prefer assigning the command directly to the toolbar button if there's other solution).
Have the View interface have a "ForceEndEdit()" which the View executes and call it whenever I'm performing some operation which might pose this problem. I find this rather odd and would prefer not to do it.
Is there any way to tell WPF to update the bindings "whenever the user calls a command -or clicks a button- outside the control not necessarily losing focus"? In case there's not, is there any solution any of you have found to this problem that is more elegant than the proposed above and I might have not thought of?
As I said, triggering the binding update OnPropertyChanged (which is what I've seen proposed to similar -though not identical- questions) is not a good enough solution in this particular case.
PS: this would not be only for textboxes, but any kind of editing control (datepickers, range pickers, etc.) and those controls might be third-party and I wouldn't necessarily have access to their source code.
PS2: I'm using .NET 4.5
A:
If you are doing something computationally expensive during OnPropertyChanged() with UpdateSourceTrigger=PropertyChanged you should consider using Delay in the Binding so that the Binding updates only once the user has stopped entering a value in the control.
This could solve your problem because it's interaction time-based rather than relying on some other event/occurance before initiating the update. This property is new in .NET 4.5 which is why I asked what version of .NET you were using.
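A minimal sketch of what that looks like in XAML (the property name and the 500 ms value are placeholders; Delay is specified in milliseconds and requires .NET 4.5):
<TextBox Text="{Binding SomeProperty, UpdateSourceTrigger=PropertyChanged, Delay=500}" />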
|
[
"stackoverflow",
"0008897817.txt"
] | Q:
-[NSWorkspace openFile:withApplication:] wait for opening Application
In my Obj-C App I'm using the following code to open a file in Pages (or any other Application):
[[NSWorkspace sharedWorkspace] openFile:theUrl withApplication:@"Pages"];
Mainly when bigger files are opened, this may take a few seconds to finish.
So I want my Application to wait for Pages until it completely opened the file.
The following code is how I would love to do it:
[[NSWorkspace sharedWorkspace] openFile:theUrl withApplication:@"Pages" onFinish:@selector(pagesfinishedopening)];
Of course I could simply use the sleep() function, but this would slow down the app on small files and would not work when the files are bigger than excepted.
I already tried something with the NSApplication, but then the opening of the file in Pages is not respected, only the start of the target application can be monitored.
Any Ideas?
A:
You can listen for notifications arriving from NSWorkSpace like shown below
- (void)myMethod {
[[[NSWorkspace sharedWorkspace] notificationCenter] addObserver:self
selector:@selector(appDidLaunch:)
name:NSWorkspaceDidLaunchApplicationNotification
object:nil];
[[NSWorkspace sharedWorkspace] openFile:theUrl withApplication:@"Pages"];
}
- (void)appDidLaunch:(NSNotification*)notification {
NSLog(@"app info: %@", [notification userInfo]);
}
|
[
"stackoverflow",
"0059878160.txt"
] | Q:
Partition list of tuples based on a value within each tuple
I am trying to sort a set of data into 2 separate lists, fulltime and parttime, but it doesn't seem to be working. Can somebody point out where I'm getting this wrong?
data = [(['Andrew'], ['FullTime'], [38]),
(['Fred'], ['PartTime'], [24]),
(['Chris'], ['FullTime'], [38])]
def sort(var1, datadump):
positionlist = []
for b in range(0, len(datadump)):
temp2 = datadump[b][1]
if (temp2 == var1):
positionlist.append(datadump[b])
return (positionlist)
FullTimeList = sort("FullTime", data)
PartTimeList = sort("PartTime", data)
print(FullTimeList)
print(PartTimeList)
A:
This is solved by altering
if (temp2 == var1):
to
if (temp2[0] == var1):
This is because the elements within each tuple are lists holding a string, not the strings themselves.
This problem could also be solved using two list comprehensions:
FullTimeList = [x for x in data if x[1][0] == 'FullTime']
PartTimeList = [x for x in data if x[1][0] == 'PartTime']
A:
Not an answer: just a suggestion. Learn how to use the python debugger.
python -m pdb <pythonscript.py>
In this case, set a breakpoint on line 9
b 9
Run the program
c
When it breaks, look at temp2
p temp2
It tells you
['FullTime']
Look at var1
p var1
It tells you
'FullTime'
And there is your problem.
|
[
"stackoverflow",
"0060365602.txt"
] | Q:
What is causing the empty area to the right of the image on this web-page?
I am trying to display a div element that contains an image element and shows scrollbars when the image is, or grows, too large to fit on the screen. This div element is contained in a parent div that is used to horizontally center its contents, which, besides the div and its image already mentioned, are two other div elements, one on the left side of the image-div and one on the right side of the image-div.
However, when the image is not wide enough, an empty area shows on the right side of the image.
I don't want to increase the image size. I want the div that encloses the image to shrink to fit the image when the image is displayed with a width less than 100%. When the image is at 100% or greater, I want the parent div to grow, but no larger than fits on the screen. Specifically, I don't want the image to grow so large that it causes the web page to begin scrolling off the bottom of the page.
Here is code that shows what I'm talking about.
div {
border: thin solid black;
position: relative;
}
div:first-of-type {
display: inline-block;
padding: 10px;
}
div:first-of-type > div {
padding: unset;
}
div:first-of-type > div:first-of-type {
display: table;
margin: 0 auto;
}
div:first-of-type > div:first-of-type > div { display: table-cell }
div:first-of-type > div:first-of-type > div:nth-of-type(2) {
overflow: scroll;
background-color: gray
}
div:first-of-type > div:first-of-type > div,
div:first-of-type > div:last-of-type > div {
vertical-align: middle;
padding: 0
}
img { width: 50%;
min-width: 50px;
min-height: 50px;
}
<div>
<div>
<div><div><</div></div>
<div>
<img src="https://townsquare.media/site/341/files/2012/11/tumblr_ls6ujhB6wV1qfq9oxo1_5001.jpg?w=980&q=75">
</div>
<div><div>></div></div>
</div>
</div>
At the bottom of this question are two pictures, the first is what I see when I run the code in the snippet and the second is what I want to see the code produce. Notice that the gray area is gone in the second picture and the box enclosing the image and controls has shrunk to fit the content and is centered horizontally.
Can someone please help with the css or html code. There isn't any javascript, and likely won't need to be any.
Incidentally, when this part is working, I'm going to add the SpryMap plug-in to allow the user to scroll the image when it is larger than the img element's area. However, when the image's zoom level is under 100%, I want the image to be shown centered on the screen between the < and > controls, without scrollbars unless the image causes an overflow, all on one line.
Thank you.
A:
Please see this CodePen example that solves the empty space problem. The solution addresses the gap between the right side of the image and the vertical scrollbar to the right of the image, but this is only really a problem because the overflow/overflow-y is set to scroll rather than auto/hidden. The gap also prevents the > right-pointing arrow from sitting against the right side of the image and prevents proper horizontal centering of the image and surrounding controls. These problems are solved when the image's height/width are assigned pixel values, not % values.
function inputChange( This ) {
if( This.checkValidity() ) {
var img = document.querySelector( 'img' );
var div = img.parentElement;
var liveStyle = window.getComputedStyle( div );
var maxHeight = parseInt( liveStyle.getPropertyValue( 'max-height' ) );
var maxWidth = parseInt( liveStyle.getPropertyValue( 'max-width' ) );
var v = ~~This.value;
var h = ( img.naturalHeight * v ) / 100;
var w = ( img.naturalWidth * v ) / 100;
div.style.height =
img.style.height = h + 'px';
div.style.width =
img.style.width = w + 'px';
div.style.overflowX = ( ( w <= maxWidth ) ? 'hidden' : 'scroll' );
div.style.overflowY = ( ( h <= maxHeight ) ? 'hidden' : 'scroll' );
}
}
|
[
"cooking.stackexchange",
"0000091755.txt"
] | Q:
'Caramelization' of tomato sauce in slow cooker
For the last few years I've made my tomato sauces in a slow cooker ('crock pot'), or actually, in a machine that is sold as a 'plate warmer' but works great for low temperature cooking. I pre-heat the ingredients on my stove, then transfer it to the slow cooker for 10 or 14 hours to let all the flavors blend, then I puree and can the result. When I first learned this technique, I was told not to stir the sauce, because the long cooking time makes the sugars at the top caramelize and that would bring out a great sweet flavor. Indeed the top, after being in the cooker for that long, browns a bit, and the flavor is great.
However, recently I was learning a bit more about caramelization to understand my baking better, and it turns out that there are no sugars that caramelize at temperatures < 110 °C. So now I'm wondering - is this caramelization of my tomato sauce just a myth? The machine only goes up to 90 °C. I've checked the temperature at various depths in my sauce with an infrared thermometer, and indeed the temperature is nowhere higher than that. Anyone have more than anecdotal information on the chemistry of making tomato sauce?
A:
Short answer: if it doesn't get heated to the caramelization temperature then it does not caramelize. The science is here, and it says you need at least 110 °C for fructose.
Browning in your case is probably not caramelization, but a Maillard reaction, which
is a chemical reaction between an amino acid and a reducing sugar, usually requiring the addition of heat.
Maillard reaction can happen at lower temperatures if given enough time.
Lifted directly from this very useful answer that cites the excellent Harold McGee's book On Food and Cooking (emphasis mine):
There are exceptions to the rule that browning reactions require temperatures above the boil. Alkaline conditions, concentrated solutions of carbohydrates and amino acids, and prolonged cooking times can all generate Maillard colors and aromas in moist foods. For example, alkaline egg whites, rich in protein, with a trace of glucose, but 90% water, will become tan-colored when simmered for 12 hours. The base liquid for brewing beer, a water extract of barley malt that contains reactive sugars and amino acids from the germinated grains, deepens in color and flavor with several hours of boiling. Watery meat or chicken stock will do the same as it's boiled down to make a concentrated demiglace. Persimmon pudding turns nearly black thanks to its combination of reactive glucose, alkaline baking soda, and hours of cooking; balsamic vinegar turns nearly black over the course of years!
So in your temperature range:
~212-300 °F (100-150 °C) - Maillard gets slower as temperature goes lower, generally requiring many hours near the boiling point of water
~130-212 °F (55-100 °C) - Maillard requires water, high protein, sugar, and alkaline conditions to advance noticeably in a matter of hours; generally can take days
|
[
"stackoverflow",
"0047729043.txt"
] | Q:
PHP posting error
I keep getting a Cannot POST /app/newpost/newpost.php error when I try to post an HTML form and retrieve the data from another HTML page.
HTML for the form is:
<div style="text-align:center">
<h1 style="font-size:36px; font-family:verdana;">
Create a new forum post
</h1>
</div>
<form action="newpost.php" method="POST" style="text-align:center">
<P style="font-size:24px; font-family:verdana;">Title: <input type="text" name="title" style="font-size:24px; font-family:verdana;"></P>
<P style="font-size:24px; font-family:verdana;">Description: <input type="text" name="desc" style="font-size:24px; font-family:verdana;"></P>
<input type="submit" action="window.location.href='/app/forum/forum.component.html'" style="background-color: #4CAF50; border: none; color: white; padding: 15px 32px; text-align: left; text-decoration: none; display: inline-block; font-size: 16px; margin: 4px 2px; cursor: pointer;">
</form>
PHP file is
<?php
if (isset($_POST["submit"]))
{$title = $_POST['title'];
$desc = $_POST['desc']
}
?>
I'm still very new to PHP, someone please help!
And how do I retrieve the posted data on another HTML page?
A:
You need a web server with PHP installed to test/run your PHP script. You cannot have it run directly from your file system. Depending on your environment, you might want to install something called WAMP, XAMPP, or MAMP.
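As a hedged illustration (not part of the original answer), once the form is served through a PHP-capable web server, newpost.php could read the posted fields like this. Note that the isset check only fires if the submit button carries a name attribute (assumed here to be name="submit"):
<?php
// Hypothetical sketch of newpost.php; assumes the form's submit button has name="submit".
if (isset($_POST['submit'])) {
    $title = htmlspecialchars($_POST['title'] ?? '');
    $desc  = htmlspecialchars($_POST['desc'] ?? '');
    echo "Title: " . $title . "<br>";
    echo "Description: " . $desc;
}
?>
To show the data on another page, you would typically submit to that PHP page (or store the values in a session or database) rather than to a static .html file.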
|
[
"stackoverflow",
"0021402745.txt"
] | Q:
Huge XML to Model in C#
I am working with a huge XML file, using a tool in VS2010 that can convert it into a Model (a .cs class). Now, in order to read that XML, I am parsing through the tags and filling in my objects in the Model.
I just want to know if I am on the right track or whether there's an easier way to load the Model with the XML data.
A:
You can use the XmlSerializer class to do this for you.
// Requires System.IO and System.Xml.Serialization.
using (StreamReader reader = File.OpenText("FilePath"))
{
    XmlSerializer serial = new XmlSerializer(typeof(MyModel));
    MyModel model = (MyModel)serial.Deserialize(reader);
}
|
[
"stackoverflow",
"0058610260.txt"
] | Q:
C++ pass parameter pack to std::map results in error C3245
I try to call functions held in a map (to achieve reflection), with the arguments passed as a parameter pack, which might look a bit strange. I want to get it to run anyway. Currently I end up with the following error:
main.cpp(36): error C3245: 'funcMapA': use of a variable template requires template argument list
main.cpp(23): note: see declaration of 'funcMapA'
Here is my minimum (not)working example:
#include <functional>
#include <map>
#include <string>
#include <sstream>
#include <iostream>
#include <iterator>
#include <vector>
#include <utility>
void DoStuff_1(int i) {
std::cout << "DoStuff_1 " << i << "\n";
}
void DoStuff_2(int i, int k) {
std::cout << "DoStuff_2 " << i << ", " << k << "\n";
}
void DoStuff_3(int i, int k, int l) {
std::cout << "DoStuff_3 " << i << ", " << k << ", " << l << "\n";
}
template <typename ... Ts>
std::map<std::string, std::function<void(Ts&& ... args)>> funcMapA = {
{"DoStuff_1", [](Ts&& ... args) {DoStuff_1(std::forward<Ts>(args)...); }},
{"DoStuff_2", [](Ts&& ... args) {DoStuff_2(std::forward<Ts>(args)...); }},
{"DoStuff_3", [](Ts&& ... args) {DoStuff_3(std::forward<Ts>(args)...); }}
};
std::map<std::string, std::function<void(int, int, int)>> funcMapB = {
{"DoStuff_1", [](int x, int y, int z) {DoStuff_1(x); }},
{"DoStuff_2", [](int x, int y, int z) {DoStuff_2(x, y); }},
{"DoStuff_3", [](int x, int y, int z) {DoStuff_3(x, y, z); }}
};
int main(int argc, char** argv) {
funcMapA["DoStuff_" + std::to_string(3)](1, 2, 3); //Failing
funcMapB["DoStuff_" + std::to_string(3)](1, 2, 3); //Working
getchar();
return 0;
}
How can I get this (funcMapA) to work?
A:
There are 2 problems:
First: You have to provide the template arguments like this:
funcMapA<int,int,int>["DoStuff_" + std::to_string(3)](1, 2, 3);
Second:
But if you do this, your template implementation will fail, because:
{"DoStuff_1", [](Ts&& ... args) {DoStuff_1(std::forward<Ts>(args)...); }},
{"DoStuff_2", [](Ts&& ... args) {DoStuff_2(std::forward<Ts>(args)...); }},
{"DoStuff_3", [](Ts&& ... args) {DoStuff_3(std::forward<Ts>(args)...); }}
you forward 3 arguments to DoStuff_1 and DoStuff_2, which is not what you want.
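To make that concrete (a sketch, not from the original answer): with Ts deduced as int, int, int, the "DoStuff_1" entry effectively becomes the lambda below, which cannot compile because DoStuff_1 only accepts a single int:
// what the "DoStuff_1" entry expands to when funcMapA<int, int, int> is instantiated
[](int&& a, int&& b, int&& c) {
    DoStuff_1(std::forward<int>(a), std::forward<int>(b), std::forward<int>(c)); // error: too many arguments
}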
|
[
"stackoverflow",
"0050789720.txt"
] | Q:
Assembly type name that can run in an executable file without an assembler
I was attempting to learn the code that is processed natively by the processor, i.e. machine code, because I was frustrated by how difficult it was to construct an interpreted programming language. In order to create a practical programming language instead of an esolang, I set out to make a compiled language (EDIT: As SO has pointed out, I was extremely flawed in doing this)
However, when I started to learn about assembly and machine code, I realized that
machine code would vary from operating system to operating system. Knowing this, I changed my goal from learning machine code in general to learning machine code for Windows on an Intel Core processor (EDIT: Which, again, you pointed out was a really dumb idea).
Then, while I was trying to learn how to make a .exe file (I had been working on it for 2 years, so I just looked up anything related to machine code because I was desperate), I saw a post on WikiHow. It wasn't binary or the Unicode characters corresponding to the binary numbers, it was assembly (EDIT: Which, again, was a bad idea)!
This is how I (mistakenly) concluded that I could just put assembly language in a .exe file for it to work. Unfortunately, the WikiHow example didn't work, but I still have a feeling that some type of assembly will work.
THE PROBLEM: The problem is that I do not know which type of assembly will be able to work in a .exe file (or an executable file) on my computer.
THE QUESTION: What type of assembly can be run from an executable file without any compilers/assemblers compiling the assembly code? (Using Windows 64-bit, Intel Core i5-6400T CPU)
WHAT I HAVE TRIED SO FAR: Note: If I gave the full list of everything I've tried, you would get tired of reading this. So I will be saying only a portion of what I have tried so far to solve the problem.
Have tried the Intel documentation.
Have tried looking up machine code tutorials
Have tried using 6502 assembly in a Commodore 64 emulator, looking at the machine code it generated and running the machine code it returned in an executable file.
Have tried looking up assembly tutorial
Have tried OllyDbg to disassemble file for me so that I could understand assembly mnemonics turned into machine code
Looked up assembly tutorial
Looked up machine code tutorial
Looked up .exe
Looked up Intel Core documentation
Help would really be appreciated.
A:
THE PROBLEM: The problem is that I do not know which type of assembly
will be able to work in a .exe file (or an executable file) on my
computer.
THE QUESTION: What type of assembly can be run from an executable file
without any compilers/assemblers compiling the assembly code? (Using
Windows 64-bit, Intel Core i5-6400T CPU)
There are exceptions that can make it feel like this happens, but no assembly language executes directly. Assembly language is meant to be a human readable/writeable form of machine code. In practice it is much more than that to make it usable, but some percentage of your code is syntax that maps to an instruction, ideally one line of asm mapping to a single machine code instruction.
Compilers for C, etc. compile to assembly language; the compiler then calls the assembler, which turns that into an object file, and then, depending on how you used the compiler, it can/will call the linker to turn that into the final executable that the operating system understands.
Operating systems will tend to support a very limited number of binary formats. The binary formats for the well known operating systems, Windows, Linux, etc., are trivial to google and find the details on; it takes seconds to land on a page with the details. Even when using the same file format, ELF for example, each operating system is its own thing: it has rules as to what the binary must contain and how a binary must run for that operating system. Take the same hardware, the same exact physical PC, run DOS, Linux, Windows, OSX, and the binary formats and the rules for what those binaries contain, in particular the system calls into the operating system and what you wrap those with, vary. The assembler and linker, which already exist, are aware of the binary format for that operating system and target instruction set. If you want to make a language then you start with the front end, and then do the middle, the most complicated part of the whole project. Once you get past the middle, turning the high level language into digestible atomic operations, those can then be ported onto one or more instruction sets through the backend. Operating system calls are usually handled by libraries, not necessarily the language itself, JAVA, Python and some others being exceptions. So a printf in C links into a C library which has target- and operating-system-specific assembly language to bridge the layer into the operating system, with some percentage (for printf that is HUGE) of code that is ideally in that same language and compiled then, or usually at some time in the past, into a linkable library.
Honestly, it sounds like you are not ready to make a compiler; you need to learn some basics by examining some simple tools that are small enough to be understandable. Find course material or an online/free class (or book/books) that covers these basic tool topics, assembler and linker, then compiler. Look at a language like Pascal or Ada, which are somewhat rigid and easier to parse and turn into something than, say, C/C++. Don't look at gnu or llvm or other big projects for educational reasons; they are not the right path. Once you get into gnu you find it is barely held together with duct tape and bailing wire. LLVM has some nice documentation which doesn't match the actual tool; maybe it did many years ago, but not anymore. And as time passes it is also accumulating duct tape and bailing wire; it will take a while to catch up to gnu in that respect, but I expect it will eventually, it's the nature of these kinds of projects.
Many if not all computer science programs have a compiler class; in order for the students to have any kind of chance at success within a semester, the language to compile and the use of existing tools are tuned for this. Go find some of these classes (google is your friend), and here and there on github or elsewhere you find the occasional student that posts their code. This is often a case of thinking they hit a home run, when it is usually more of "I barely got it to work". But in either case, home run or other, one assumes they passed the class with that solution, so it is in theory digestible, as it was a semester's worth of work, tens of hours.
short answer:
You cannot execute assembly language, processors execute machine code and the machine code is PROCESSOR specific not operating system.
Binary files, .exe, elf, coff, com, ihex, srec, etc are not specific to the operating system necessarily, but an operating system will have a limited set of, possibly only one, file format they support.
There exists an assembler and linker for your operating system that know the target machine code and know the executable file format. As with other compiler authors, if your desire here is to invent a new language and make a compiler for it, then compile to assembly language and let those tools do the rest. This is called a toolchain, a chain of tools (compiler, assembler, linker). You are working the compiler tool in the chain.
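As a hedged illustration of those toolchain stages (the file names are placeholders, the exact commands depend on your environment, and on Windows you would typically run gcc from MinGW or a similar distribution):
gcc -S hello.c -o hello.s     # compiler: C source -> assembly
gcc -c hello.s -o hello.o     # assembler: assembly -> object file
gcc hello.o -o hello.exe      # linker: object file(s) -> executable the OS can load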
|
[
"math.stackexchange",
"0002094119.txt"
] | Q:
Find n of Geometric sequence
I need to find the $n$ of a geometric sequence. This is the formula for the $n$th term:
$n$th term $= ar^{n-1}$;
so I need $n$ to be on the left instead of the $n$th term. I have all the other variables except for $n$.
I want to know the position of a number in a geometric sequence: the $n$th term stands for the number, which I know already, and I also know $a$ and $r$, but I don't know the position of my $n$th term, which is $n$.
For example, given $[1, 2, 4, 8, 16]$, how can I get the position of, say, $8$, which should be $4$ based on that sequence?
Thanks
A:
If $a_n = ar^{n-1}$ then $n = \log_r(a_n/a) + 1$.
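For instance, with the sequence from the question ($a=1$, $r=2$, $a_n=8$): $n = \log_2(8/1) + 1 = 3 + 1 = 4$, matching the expected position.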
|
[
"stackoverflow",
"0039889055.txt"
] | Q:
How to download newer data today and yesterday from FTP?
I want to download newer data from my FTP server by file name, using variables for yesterday and today. The file names are structured like this:
Daily_(City)_(yyyymmdd).xlsx
I have tried this code
daily.bat
winscp.exe /console /script=daily.txt
daily.txt
::: Begin set date
for /f "tokens=1-4 delims=/-. " %%i in ('date /t') do (call :set_date %%i %%j %%k %%l)
goto :end_set_date
:set_date
if "%1:~0,1%" gtr "9" shift
for /f "skip=1 tokens=2-4 delims=(-)" %%m in ('echo,^|date') do (set %%m=%1&set %%n=%2&set %%o=%3)
goto :eof
:end_set_date
::: End set date
set /a today=%dd%
set /a yesterday=%dd%-1
@echo off
open [email protected]
get -neweronly "/Reg8/Kota/2016/Daily/Daily_Makassar_%yy%%mm%%today%.xlsx" "D:\FTP\Makassar\2016\daily"
get -neweronly "/Reg8/Kota/2016/Daily/Daily_Makassar_%yy%%mm%%yesterday%.xlsx" "D:\FTP\Makassar\2016\daily"
pause
If I run this script, nothing happens...
A:
You are combining Windows and WinSCP commands in a single file. That's not possible. Start with reading the guide to automating file transfers from FTP server or SFTP server.
And your script is too complicated, because you do not make use of the WinSCP %TIMESTAMP% syntax.
A simple way (daily.txt):
open ftp://user:[email protected]/
get -neweronly "/Reg8/Kota/2016/Daily/Daily_Makassar_%TIMESTAMP#yyyymmdd%.xlsx" "D:\FTP\Makassar\2016\daily"
get -neweronly "/Reg8/Kota/2016/Daily/Daily_Makassar_%TIMESTAMP-1D#yyyymmdd%.xlsx" "D:\FTP\Makassar\2016\daily"
(only this, discard all the other code from your daily.txt)
The %TIMESTAMP#yyyymmdd% will resolve to 20161006.
The %TIMESTAMP-1D#yyyymmdd% will resolve to 20161005 (as of 2016-10-06).
You need WinSCP 5.9 and newer for this.
Also in general, you should call the winscp.com from a batch file, not the winscp.exe /console.
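A minimal daily.bat along those lines might look like this (the /log switch is optional and the file names are placeholders):
winscp.com /script=daily.txt /log=daily.log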
|
[
"serverfault",
"0000556258.txt"
] | Q:
Why is SPDY breaking 'Vary: Accept-Encoding' in Nginx 1.4.3?
I've compiled Nginx 1.4.3 from source with the SPDY module.
However when SPDY is enabled, it seems to break my 'Vary: Accept-Encoding' header.
My Nginx Configuration:
./configure
--conf-path=/etc/nginx/nginx.conf
--pid-path=/var/run/nginx.pid
--error-log-path=/var/log/nginx/error.log
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp
--http-log-path=/var/log/nginx/access.log
--with-http_ssl_module --prefix=/usr
--add-module=./nginx-sticky-module-1.1
--add-module=./headers-more-nginx-module-0.23
--with-http_spdy_module
My nginx.conf file:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#Compression Settings
gzip on;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_proxied any;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml;
# Some version of IE 6 don't handle compression well on some mime-types,
# so just disable for them
gzip_disable "MSIE [1-6].(?!.*SV1)";
# Set a vary header so downstream proxies don't send cached gzipped
# content to IE6
gzip_vary on;
proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=qnx-cache:10m inactive=24h max_size=1g;
proxy_temp_path /var/www/nginx_cache/tmp;
server_tokens off;
include /etc/nginx/conf.d/*.conf;
map $geo $mapping {
default default;
US US;
DE DE;
CA CA;
GB GB;
}
geo $geo {
default default;
include geo.conf;
}
upstream default.backend {
# sticky;
server 192.168.0.5:8080;
server 192.168.0.6:8080;
server 192.168.0.7:8080;
}
upstream mysite.backend {
sticky name=servIDTrack hash=sha1;
server 192.168.0.5:8080 weight=10 max_fails=3 fail_timeout=10s;
server 192.168.0.6:8080 weight=10 max_fails=3 fail_timeout=10s;
server 192.168.0.7:8080 weight=10 max_fails=3 fail_timeout=10s;
}
server {
listen 80;
server_name secure.mysite.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name secure.mysite.com;
more_set_headers "Server: X-nginx/v1.1 [LB01]";
ssl_certificate /etc/nginx/ssl/secure_mysite_com_ssl.cert;
ssl_certificate_key /etc/nginx/ssl/secure_mysite_com_ssl.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
# ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-RC4-SHA:ECDHE-RSA-RC4-SHA:ECDH-ECDSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:RC4-SHA;
ssl_prefer_server_ciphers on;
keepalive_timeout 120;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_session_timeout 10m;
location / {
proxy_pass http://mysite.backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;
proxy_http_version 1.1;
add_header Strict-Transport-Security "max-age=31556926; includeSubdomains";
# Cache
proxy_cache qnx-cache;
proxy_cache_valid 200 301 302 120m;
proxy_cache_valid 404 1m;
add_header X-Cache-Status $upstream_cache_status;
proxy_cache_key "$scheme$host$request_uri";
}
}
}
Header results with SPDY enabled/disabled:
url: mypage.php (SPDY enabled)
HTTP/1.1 200 OK
cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
content-encoding: gzip
content-type: text/html; charset=utf-8
date: Wed, 20 Nov 2013 13:39:30 GMT
expires: Thu, 19 Nov 1981 08:52:00 GMT
pragma: no-cache
server: X-nginx/v1.1 [LB01]
status: 200
strict-transport-security: max-age=31556926; includeSubdomains
version: HTTP/1.1
x-cache-status: MISS
url: mypage.php (SPDY disabled)
HTTP/1.1 200 OK
Date: Wed, 20 Nov 2013 13:45:00 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Server: X-nginx/v1.1 [LB01]
Strict-Transport-Security: max-age=31556926; includeSubdomains
X-Cache-Status: MISS
Content-Encoding: gzip
url: mystyle.css (SPDY enabled)
HTTP/1.1 200 OK
date: Wed, 20 Nov 2013 12:53:49 GMT
content-encoding: gzip
last-modified: Mon, 18 Nov 2013 22:09:32 GMT
server: X-nginx/v1.1 [LB01]
x-cache-status: HIT
strict-transport-security: max-age=31556926; includeSubdomains
content-type: text/css
status: 304
expires: Wed, 18 Dec 2013 22:42:35 GMT
cache-control: max-age=2592000
version: HTTP/1.1
url: mystyle.css (SPDY disabled)
HTTP/1.1 200 OK
Date: Wed, 20 Nov 2013 13:45:01 GMT
Content-Type: text/css
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Last-Modified: Mon, 18 Nov 2013 22:09:32 GMT
Cache-Control: max-age=2592000
Expires: Wed, 18 Dec 2013 22:10:13 GMT
Server: X-nginx/v1.1 [LB01]
Strict-Transport-Security: max-age=31556926; includeSubdomains
X-Cache-Status: HIT
Content-Encoding: gzip
As you can see, when SPDY is enabled, the Vary: Accept-Encoding headers disappear.
Is this an issue with the way my nginx.conf is configured?
A:
SPDY (and HTTP/2.0) require user agents to support compression, which makes the Vary: Accept-Encoding header useless. That's why nginx drops the header.
|
[
"stackoverflow",
"0047497088.txt"
] | Q:
match article title with tags listed in another table
I need a query in MySQL that lists the matched tag_name values from one table against the article_title column in another table.
tags table
------------
tag_id tag_name
--------------
1 travel
2 tickets
3 business
4 america
article table
-------------
article_id article_title
--------- --------------
1 travel tips to america
2 cheap tickets for favorite destinations
3 prices for business class tickets to america
expected output
--------------
article_id tag_id tag_name
---------- ------- ----------
1 1 travel
1 4 america
2 2 tickets
3 3 business
3 2 tickets
3 4 america
A:
The query should be as follows:
SELECT a.article_id, t.tag_id, t.tag_name
FROM article a
JOIN tags t
ON a.article_title LIKE CONCAT('%', t.tag_name, '%')
ORDER BY a.article_id;
However, if you want to tokenize the tags with space, you should replace line 4 of the query with
ON a.article_title LIKE CONCAT('% ', t.tag_name, ' %')
This tokenized version will not match the tag america in titles like
The american dream is a national ethos of the United States
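One caveat with the space-delimited form (an observation, not part of the original answer): a tag at the very start or end of a title has no surrounding space, so it would not match. A common workaround is to pad the title before comparing:
ON CONCAT(' ', a.article_title, ' ') LIKE CONCAT('% ', t.tag_name, ' %')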
|
[
"dba.stackexchange",
"0000008845.txt"
] | Q:
Can't run query using "localhost\sqlexpress"
I am trying to dynamically build a SQL statement that includes a linked server. That works. However, when I use "localhost" to reference the current local machine's SQL Server, it doesn't work.
For example:
Select * From [LOCALHOST\SQLEXPRESS].[My_Database].[dbo].[My_Table]
However, if I specify my machine name instead, it will work.
Select * From [My_Machine\SQLEXPRESS].[My_Database].[dbo].[My_Table]
Why can't I reference Localhost?
A:
You can only reference servers that are listed under Server Objects -> Linked Servers as well as the local server via what you get back from @@SERVERNAME. Four part naming does not trigger a NETBIOS / DNS lookup. If you are referencing the local machine anyway, why not just use three part naming?
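For example, a sketch of the three-part form suggested above, using the names from the question:
Select * From [My_Database].[dbo].[My_Table]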
|
[
"stackoverflow",
"0045770832.txt"
] | Q:
Generic return type for index signature in TypeScript
Here's the problem
interface Prop<T> {
value: T;
}
class Property<T> implements Prop<T> {
value: T;
constructor(value: T) {
this.value = value;
}
}
class Node {
this.props: { [index: string]: Prop<T> } // how do you define T?
}
T can't be defined at class level here, as the intended use would be along the lines of
const strProp = node.props<string>['strProp'];
const numProp = node.props<number>['numProp'];
In other words, a Node can have various property types attached to it.
There doesn't appear to be anything in the docs about this (or perhaps I'm just not seeing it). Basically what I'm really looking for here is a generic indexer, for example:
this.props: <T>{ [index: string]: Prop<T> }
Does it exist?
Disclaimer - I'm not looking for workarounds, I understand there are ways around this but I'd like to know whether or not the support is there (I couldn't see any proposals or outstanding issues on the repo for this). There were some similar issues but nothing specific to this particular scenario.
A:
No, in TypeScript only functions (including methods) and types (classes, interfaces, type aliases) can be generic.
Additionally, it doesn't look like what you're asking for actually makes much sense. To see why, let's look at one of those workarounds you're not necessarily interested in:
abstract class Node {
abstract propsGetter<T>(index: string): Prop<T>;
}
Here we define a getter method which takes a type parameter T and a string parameter and returns a value of type Prop<T>. This is more or less the equivalent of an indexed property. Notice that it is the caller, not the implementer of this method that specifies both the type T and index.
You would, I suppose, call it like this:
declare const node: Node;
const strProp = node.propsGetter<string>('strProp');
const numProp = node.propsGetter<number>('numProp');
But wait, nothing stops you from calling it like this:
const whatProp = node.propsGetter<string>('numProp'); // no error
If you were expecting that the compiler would somehow know that the 'numProp' parameter would return a Prop<number> and that there would be an error, the compiler would disappoint you. The signature of propsGetter promises that it will return a Prop<T> for whatever value of T the caller wants, no matter what the index parameter is.
Unless you describe for the compiler some relationship between the type of index (presumably some collection of string literals) and the type of T, there's no type safety here. The type parameter does nothing for you. You might as well remove the type parameter and just do something like:
abstract class Node {
abstract propsGetter(index: string): Prop<{}>;
}
which returns a Prop<{}> value you have to type check or assert:
const strProp = node.propsGetter('strProp') as Prop<string>; // okay
const numProp = node.propsGetter('numProp') as Prop<number>; // okay
This is just as non-type-safe as the above but at least it is explicit about it.
const whatProp = node.propsGetter('numProp') as Prop<string>; // still no error but it's your responsibility
And then since we don't need the generic parameter you can indeed use an indexer:
class Node {
props: { [index: string]: Prop<{}> }
}
Recall,
Unless you describe for the compiler some relationship between the type of index (presumably some collection of string literals) and the type of T, there's no type safety here.
Do you have some kind of way to tell the compiler which property keys should return which property types? If so, this is looking like you don't actually want a pure string-indexed type but a standard object of the sort:
abstract class Node {
props: {
strProp: Prop<string>,
numProp: Prop<number>,
// ... others ...
[otherKeys: string]: Prop<{}> // default
}
}
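A hedged usage sketch for that last shape (not from the original answer): the named keys get precise types while anything else falls back to Prop<{}>:
declare const node: Node;
const s: string = node.props.strProp.value;  // typed as string
const n: number = node.props.numProp.value;  // typed as number
const other = node.props['somethingElse'];   // Prop<{}>, so you must narrow or assert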
Maybe one of those meets your use case... not that you care, as your disclaimer disclaims. In the case that you don't care, please ignore everything after my first sentence.
Hope that helps. Good luck!
|
[
"stackoverflow",
"0004874172.txt"
] | Q:
iPhone / iOS custom control
I would like to know how to create a custom iPhone control from scratch, or using an existing library or framework.
I have seen the Three20 library, as well as Tapku and other custom touch libraries, which are nice for specialised iOS controls such as table views etc., but I'm talking about making totally custom, interactive controls here.
Lets say I wanted to make a dial control similar to the one from this app: http://store.apple.com/us/product/H2654LL/A.
Where would I start?
Would I subclass UIView and customize it?
Would I use quartz 2d?
Would I use OpenGL ES to draw something like this to the screen?
Can I still use IB to design/layout my custom view?
I'm just a bit confused which way to go here.
Yes - this question has been asked and answered a few times before, but I have yet to find a satisfactory answer which addresses the above points.
A:
What to subclass
Instead of UIView you would probably want to subclass UIControl. This class has functionality for the Target/Action pattern built in, which you can use to respond to actions generated by your custom control. Most elements in UIKit like buttons and sliders inherit from UIControl for this specific reason.
Visualizing your subclass
Drawing really depends on what you want to achieve and what parts you want to animate. You can use images, draw using quartz or OpenGL depending on what you need or what you prefer. Just use the technique to achieve the desired effect in the most simplistic way. Multiple images can be used to handle different states (pressed, etc) or be used for a sprite animation. CALayers are nice to easily rotate or move.
No matter what technology you use, you would probably use incoming touch events to control the animation. In case of a dial control you would control the amount of rotation based on y coordinate movement for example.
To illustrate: I have, for example, used images when my control only needed to change when pressed: just swap the images. I also like to use CALayer a lot, which gives you easy ways to generate borders, masks, gradients and a corner radius, all easily animated too.
Using in Interface Builder
With Cocoa on the desktop it was possible to build custom IB palettes for custom controls. iOS never had this functionality, and I don't think the IB plugins are available for Xcode 4.
So the only way to handle custom subclasses currently is by using a UIView in IB and setting the 'Custom class' field in the Identity Inspector to the name of your custom class. This way you have a view you can layout and size. In Interface Builder it's just a rectangle, when running your app the XIB will actually deserialize that view to your custom class.
When using a UIControl you get the target/action mechanisms for free. So you can wire up your touch events to any object in IB just like with any other standard UIKit control.
One thing to note: if you have custom - initWith....: selectors, those will not be called. Your class is deserialized from the XIB so you should use - initWithCoder:(NSCoder *)aDecoder; as initialization.
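A hedged Objective-C sketch pulling these points together (the DialControl class name and the touch-handling body are invented for illustration; they are not from the original answer):
@interface DialControl : UIControl
@end

@implementation DialControl
- (id)initWithCoder:(NSCoder *)aDecoder {
    self = [super initWithCoder:aDecoder];
    if (self) {
        // setup shared by IB-loaded and programmatically created instances
    }
    return self;
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesMoved:touches withEvent:event];
    // derive a rotation from the touch position here, then notify observers
    [self sendActionsForControlEvents:UIControlEventValueChanged];
}
@end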
|
[
"stackoverflow",
"0033973775.txt"
] | Q:
Migrating style properties to separated .css file
Dear community members,
I need to put all:
style="display:none"
from all rows with:
<p class="text" style="display:none">
to the following stylesheet:
style.css
on the http://berdyanskaya56.ru/index.html.
Notably, if I move "display:none" to the .text rule in style.css, then the descriptive titles no longer change correspondingly when the pictures slide (i.e., when using the arrows on both sides of the website screen) on the page.
For final clarification:
I need to move all style="display:none" attributes from <p class="text" style="display:none"> into the .text rule in style.css.
This will substantially help me to clean up the code on the website. After the changes are made, the <p> tags in the HTML page will look like:
<p class="text">
If it requires further clarification, just post your comments below this post.
Thank you very much for your help in advance!
UPD: It will also require changing the function in JS that controls display:none. If you can help me with it, just post it below :-)!
A:
/* style.css */
/* set all `p` elements having `class` `"text"`
`display` property to `none`
*/
p.text {
display:none;
}
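The UPD also asks about the JavaScript side. A hypothetical sketch (the currentSlide variable and the visible class name are assumptions, not taken from the site's actual script): once the inline styles are gone, the slider can toggle a class on the caption for the current picture instead of writing element.style.display directly.
/* style.css */
p.text.visible {
    display: block;
}

// slider script (hypothetical)
var captions = document.querySelectorAll('p.text');
for (var i = 0; i < captions.length; i++) {
    captions[i].classList.toggle('visible', i === currentSlide);
}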
|
[
"stackoverflow",
"0036396217.txt"
] | Q:
How to bind a value to a variable using ng-init in a ng-repeat?
I am working on a project based on Angular. I am facing a problem while initialising a variable in ng-repeat. I want to initialise a variable in ng-init and use it in ng-model. I am getting the following error in the console; any help will be appreciated.
ionic.bundle.js:25510 Error: [$parse:syntax] Syntax Error: Token '{' invalid key at column 6 of the expression [key={{component.name}}] starting at [{component.name}}].
http://errors.angularjs.org/1.4.3/$parse/syntax?p0=%7B&p1=invalid%20key&p2=6&p3=key%3D%7B%7Bcomponent.name%7D%7D&p4=%7Bcomponent.name%7D%7D
at http://localhost:8100/lib/ionic/js/ionic.bundle.js:13248:12
at Object.AST.throwError (http://localhost:8100/lib/ionic/js/ionic.bundle.js:26061:11)
at Object.AST.object (http://localhost:8100/lib/ionic/js/ionic.bundle.js:26048:16)
at Object.AST.primary (http://localhost:8100/lib/ionic/js/ionic.bundle.js:25956:22)
at Object.AST.unary (http://localhost:8100/lib/ionic/js/ionic.bundle.js:25944:19)
at Object.AST.multiplicative (http://localhost:8100/lib/ionic/js/ionic.bundle.js:25931:21)
at Object.AST.additive (http://localhost:8100/lib/ionic/js/ionic.bundle.js:25922:21)
at Object.AST.relational (http://localhost:8100/lib/ionic/js/ionic.bundle.js:25913:21)
at Object.AST.equality (http://localhost:8100/lib/ionic/js/ionic.bundle.js:25904:21)
at Object.AST.logicalAND (http://localhost:8100/lib/ionic/js/ionic.bundle.js:25896:21)
at Object.AST.logicalOR (http://localhost:8100/lib/ionic/js/ionic.bundle.js:25888:21)
at Object.AST.ternary (http://localhost:8100/lib/ionic/js/ionic.bundle.js:25874:21) <div ng-repeat="component in reportTemplate" ng-init="key={{component.name}}" class="inputFieldSection inputFieldTitle" ng-if="component.type == 'text'" data-ng-animate="1">
following is my code snippet
<div ng-repeat="component in reportTemplate" ng-init="key={{component.name}}" class="inputFieldSection inputFieldTitle" ng-if="component.type == 'text'">
<label class="item item-input">
<input type="text" name={{component.name}} ng-model=reportTemplateKeyData[key] ng-focus="clearValidation();" max-length="50" required placeholder="{{component.label}}">
</label>
<p ng-show="createReportForm[component.name].$error.required">Please Enter {{component.name}}</p>
</div>
A:
you should remove {{}}.
ng-init="key=component.name"
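Applied to the markup from the question, the opening tag would become something like this (only the ng-init expression changes; the rest of the template stays the same):
<div ng-repeat="component in reportTemplate" ng-init="key = component.name" class="inputFieldSection inputFieldTitle" ng-if="component.type == 'text'">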
|
[
"tex.stackexchange",
"0000332775.txt"
] | Q:
Centered \paragraph section with line break
I tried the following code for getting a line break:
\RedeclareSectionCommands[afterskip=1sp]{paragraph,subparagraph}
However, I don’t have any idea how to get the paragraph centered. I tried \centering already, but to no avail.
A:
I am not sure if I understand what you want to do. If the headings of all section levels should be centered, redefine \raggedsection:
\documentclass{scrartcl}
\usepackage{blindtext}
\RedeclareSectionCommands[
afterskip=1sp,
beforeskip=-3.25ex plus -1ex minus -.2ex
]{paragraph,subparagraph}
\renewcommand*\raggedsection{\centering}
\begin{document}
\blinddocument
\end{document}
If only paragraph and subparagraph headings should be centered (I do not recommend this), redefine \sectionlinesformat:
\documentclass{scrartcl}
\usepackage{blindtext}
\RedeclareSectionCommands[
afterskip=1sp,
beforeskip=-3.25ex plus -1ex minus -.2ex
]{paragraph,subparagraph}
\makeatletter
\renewcommand\sectionlinesformat[4]{%
\ifstr{#1}{paragraph}{\centering#3#4}{%
\ifstr{#1}{subparagraph}{\centering#3#4}{%
\@hangfrom{\hskip #2#3}{#4}%
}}}
\makeatother
\begin{document}
\blinddocument
\subparagraph{Test}
\blindtext
\end{document}
|
[
"stackoverflow",
"0056577467.txt"
] | Q:
Visual Studio- moving local database
I am working on an app in VS which includes a SQL database stored locally on my C drive. I can publish the app and it works fine on my own computer, but because the database is stored locally, I cannot run the app from any other computer (SQL exception 52). I would like to move the app to a network drive so it can be accessed by multiple users.
I have tried to move the database by changing the default database location in the SQL Server Object Explorer. I'm wondering if I should have SQL Server Express LocalDB installed?
I have no programming training but have been dumped with this project at work as I have used VBA before, so I'm sorry if this question is stupid. I would really appreciate it if someone could point me in the right direction!
A:
When accessing your database, rather than using just the database name, you should specify the file path as well, like this:
Public Filename As String = $"[**FilePath**]ActiveFitness.accdb"
This tells the programme where to look for the database, and is better practice than relying on the default location.
|
[
"stackoverflow",
"0021250103.txt"
] | Q:
Restlet with Same Path and Different Verbs
I have the following interesting situation. I have one path with three verbs: GET, DELETE, POST. They correspond to three routes in a Camel context. My observation is that if the three routes are in the same Camel context, everything works well. But if the routes are in different Camel contexts, only one of them works. So far, I have noticed that DELETE works and the two others stop working. My example context is below:
<camel:camelContext id="get-test" autoStartup="true">
<camel:route>
<camel:from uri="restlet:/path?restletMethod=DELETE"></camel:from>
<camel:transform>
<camel:constant>Hi Delete</camel:constant>
</camel:transform>
</camel:route>
<camel:route>
<camel:from uri="restlet:/path?restletMethod=GET"></camel:from>
<camel:transform>
<camel:constant>Hi Get</camel:constant>
</camel:transform>
</camel:route>
<camel:route>
<camel:from uri="restlet:/path?restletMethod=POST"></camel:from>
<camel:transform>
<camel:constant>Hi Post</camel:constant>
</camel:transform>
</camel:route>
</camel:camelContext>
So, the above is the working scenario. The scenario that does not work is below with three different contexts:
<camel:camelContext id="delete-test" autoStartup="true">
<camel:route>
<camel:from uri="restlet:/path?restletMethod=DELETE"></camel:from>
<camel:transform>
<camel:constant>Hi Delete</camel:constant>
</camel:transform>
</camel:route>
</camel:camelContext>
<camel:camelContext id="get-test" autoStartup="true">
<camel:route>
<camel:from uri="restlet:/path?restletMethod=GET"></camel:from>
<camel:transform>
<camel:constant>Hi Get</camel:constant>
</camel:transform>
</camel:route>
</camel:camelContext>
<camel:camelContext id="post-test" autoStartup="true">
<camel:route>
<camel:from uri="restlet:/path?restletMethod=POST"></camel:from>
<camel:transform>
<camel:constant>Hi Post</camel:constant>
</camel:transform>
</camel:route>
</camel:camelContext>
Maybe I am missing something in the camel spec that forbid this kind of configuration?
A:
Yes, this is not supported. The logic that selects the route to process the message uses only the context path.
Not sure how easy it would be to add restletMethod as well as part of that selection logic. Feel free to log a JIRA ticket, and dive into the code to contribute. We love contributions:
http://camel.apache.org/contributing
|
[
"gaming.stackexchange",
"0000250909.txt"
] | Q:
Is it possible to spawn a Thaumcraft 4 hungry node in creative mode?
I'm doing some experiments in creative mode to determine if it's possible to energize a Thaumcraft 4 hungry node without converting it to a tainted node. I've seen people posting that they managed to accomplish this using AE2 formation planes in conjunction with stabilizers / transducers and redstone blocks, but I've been unable to duplicate what they've done in survival mode.
Hungry nodes are .. well, kind of rare, at least from the perception of someone deliberately trying to find one. In creative mode, a player can spawn a random aura node, but I've been unable to spawn a hungry node despite hundreds of attempts.
From what I can tell from the wikis (FTB/Thaumcraft 4) - there's no preferential biome for these to spawn. They just .. occasionally happen.
Is there anything I can do to deliberately spawn one of these in creative mode so I can experiment, or at least increase my chances (biome placement / etc)?
A:
You can use the /give command to spawn jarred nodes with specific NBT data, like this command, which gives a player a jarred hungry node with 100 terra:
/give <player> Thaumcraft:BlockJarNodeItem 1 0 {nodetype: 4, Aspects:[{amount:100, key: "terra"}], nodeid: "0:0:0:0"}
Simply place the jar in the world and unpack it like you would any other node.
In some cases your command will be too big to fit into the chat bar; I suggest using a command block to give the node to the nearest player (using the @p target specifier) if you run into that.
There are four tags that control the behavior of the spawned node:
nodetype determines what type of node it is, and ranges from 0 to 5.
Normal
Unstable
Sinister
Tainted
Hungry
Pure
nodemod determines whether it's a bright, pale, or fading node. If it's omitted, the node is normal.
Bright
Pale
Fading
Aspects is an array of aspects that the node has; it's defined like this:
Aspects:[{amount:<amount>, key:<aspect>},{amount:<amount2>, key:<aspect2>}]
Keys are the lowercase names of the aspects; amount is the amount of vis the node has of that type. For example, to make a node with 100 of each primal aspect, your Aspects tag should look like this:
Aspects:[{amount:100, key:"terra"},{amount:100, key:"aqua"},{amount:100, key:"perditio"},{amount:100, key:"ordo"},{amount:100, key:"ignis"},{amount:100, key:"aer"}]
I can't find anything on nodeid unfortunately, so I can only speculate on it. It appears to be information about where the node was when it was jarred, in the form dimension:x:y:z - you should be fine just leaving it at 0:0:0:0.
The bulk of this information was sourced from this Reddit comment and this page on the Thaumcraft 4 Wikia.
|
[
"stackoverflow",
"0007839157.txt"
] | Q:
Android database data retrieval not possible for me
I have an Android SQLite cursor problem at cur3 = db3.rawQuery; below are my logcat errors and the entire class code. I am facing a problem retrieving data based on a comparison of two tables and the primary key (pretest_id) of the first table, pretestTable. I do not understand what is wrong with my SQL query.
Logcat errors:
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.a1technology.remoteid/com.a1technology.remoteid.Screening}: android.database.sqlite.SQLiteException: near "WHERE": syntax error: , while compiling: SELECT tbl_pre_test.ID AS _id, tbl_pre_test.ddlTestingSession, tbl_pre_test.txtReason, tbl_pre_test.txthowmany, tbl_pre_test.txtques1, tbl_pre_test.rblques2a, tbl_pre_test.rblques2b, tbl_pre_test.rblques3, tbl_pre_test.txtques4, tbl_pre_test.rblques5, tbl_pre_test.rblques6, tbl_pre_test.rblques7, tbl_pre_test.rblques8, tbl_pre_test.rblques9, tbl_pre_test.ddlsick, tbl_pre_test.txtques11, tbl_pre_test.rblques12, tbl_pre_test.txtques13, tbl_pre_test.txtques14, tbl_pre_test.rblques15, tbl_pre_test.rblques16, tbl_pre_test.rblques17, tbl_pre_test.txtques18, tbl_pre_test.txtVCT, WHERE tbl_pre_test.ID =tbl_finger.template AND tbl_pre_test.pretest_id=?
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1622)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1638)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.app.ActivityThread.access$1500(ActivityThread.java:117)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:928)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.os.Handler.dispatchMessage(Handler.java:99)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.os.Looper.loop(Looper.java:123)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.app.ActivityThread.main(ActivityThread.java:3647)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at java.lang.reflect.Method.invokeNative(Native Method)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at java.lang.reflect.Method.invoke(Method.java:507)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at dalvik.system.NativeStart.main(Native Method)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): Caused by: android.database.sqlite.SQLiteException: near "WHERE": syntax error: , while compiling: SELECT tbl_pre_test.ID AS _id, tbl_pre_test.ddlTestingSession, tbl_pre_test.txtReason, tbl_pre_test.txthowmany, tbl_pre_test.txtques1, tbl_pre_test.rblques2a, tbl_pre_test.rblques2b, tbl_pre_test.rblques3, tbl_pre_test.txtques4, tbl_pre_test.rblques5, tbl_pre_test.rblques6, tbl_pre_test.rblques7, tbl_pre_test.rblques8, tbl_pre_test.rblques9, tbl_pre_test.ddlsick, tbl_pre_test.txtques11, tbl_pre_test.rblques12, tbl_pre_test.txtques13, tbl_pre_test.txtques14, tbl_pre_test.rblques15, tbl_pre_test.rblques16, tbl_pre_test.rblques17, tbl_pre_test.txtques18, tbl_pre_test.txtVCT, WHERE tbl_pre_test.ID =tbl_finger.template AND tbl_pre_test.pretest_id=?
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.database.sqlite.SQLiteCompiledSql.native_compile(Native Method)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.database.sqlite.SQLiteCompiledSql.compile(SQLiteCompiledSql.java:92)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.database.sqlite.SQLiteCompiledSql.<init>(SQLiteCompiledSql.java:65)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.database.sqlite.SQLiteProgram.<init>(SQLiteProgram.java:83)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.database.sqlite.SQLiteQuery.<init>(SQLiteQuery.java:49)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.database.sqlite.SQLiteDirectCursorDriver.query(SQLiteDirectCursorDriver.java:42)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.database.sqlite.SQLiteDatabase.rawQueryWithFactory(SQLiteDatabase.java:1356)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.database.sqlite.SQLiteDatabase.rawQuery(SQLiteDatabase.java:1324)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at com.a1technology.remoteid.Screening.onCreate(Screening.java:320)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1586)
10-20 21:55:28.203: ERROR/AndroidRuntime(2192): ... 11 more
Class Code:
static final String pretestTable="tbl_pre_test";
public static final String columnID="ID";
public static final String DDL_Testing_Session="ddlTestingSession";
public static final String Text_Reason="txtReason";
public static final String Text_Howmany="txthowmany";
public static final String Text_Ques1="txtques1";
public static final String RBL_Ques2a="rblques2a";
public static final String RBL_Ques2b="rblques2b";
public static final String RBL_Ques3="rblques3";
public static final String TXT_Ques4="txtques4";
public static final String RBL_Ques5="rblques5";
public static final String RBL_Ques6="rblques6";
public static final String RBL_Ques7="rblques7";
public static final String RBL_Ques8="rblques8";
public static final String RBL_Ques9="rblques9";
public static final String DDL_Sick="ddlsick";
public static final String TXT_Ques11="txtques11";
public static final String RBL_Ques12="rblques12";
public static final String TXT_Ques13="txtques13";
public static final String TXT_Ques14="txtques14";
public static final String RBL_Ques15="rblques15";
public static final String RBL_Ques16="rblques16";
public static final String RBL_Ques17="rblques17";
public static final String TXT_Ques18="txtques18";
public static final String pretest_id="PretestID";
public static final String TXT_Vct="txtVCT";
static final String fingerTable="tbl_finger";
public static final String fingerTableColumnID="ID";
public static final String Template="template";
static boolean addrow=false;
static int buttonCounter;
int requestCode;
private SQLiteDatabase db,db1,db2,db3;
private DopenHelper helper;
String TableName = "tbl_pre_test";
String TableName1 = "tbl_screening";
String TableName2 = "tbl_postscreen";
String TableName3 = "tbl_finger";
String gotDataScreening1,gotDataScreening2;
private String valuOfDate,textType,valueOfID,valueOfDDLTS,valueOfReason,valueOfHowmany,valueOftxtques1,valueOfrblques2a
,valueOfrblques2b,valueOfrblques3,valueOftxtques4,valueOfrblques5,valueOfrblques6,valueOfrblques7,valueOfrblques8
,valueOfrblques9,valueOfddlsick,valueOftxtques11,valueOfrblques12,valueOftxtques13,valueOftxtques14,valueOfrblques15
,valueOfrblques16,valueOfrblques17,valueOftxtques18;
//
// private int mYear;
// private int mMonth;
// private int mDay;
TextView PreTestView,ScreeningTextView,PostScreenTV;
private simpleefficientadapter arrayadapter11,arrayadapter22,arrayadapter33;
ListView mylist1;
ListView mylist2;
ListView mylist3;
ArrayList<String> prtestData;
ArrayList<String> screeningData;
ArrayList<String> postData;
TextView preTextView,screeTextView,postScreenTextView;
String Date111,Date222,Date333;
//Bundle bundle;
String s1;
String s2;
String s3;
@Override
protected void onPause() {
// TODO Auto-generated method stub
super.onPause();
db.close();
db1.close();
db2.close();
db3.close();
helper.close();
}
@Override
protected void onCreate(Bundle savedInstanceState) {
// TODO Auto-generated method stub
super.onCreate(savedInstanceState);
//openAndQueryDatabase();
//displayResultList();
setContentView(R.layout.screening);
helper=new DopenHelper(Screening.this);
db=helper.getWritableDatabase();
db1=helper.getWritableDatabase();
db2=helper.getWritableDatabase();
db3=helper.getWritableDatabase();
s1="Pree-Test";
s2="Screening";
s3="Post Screen";
//new_screening=(Button)findViewById(R.id.new_screening);
main_return=(Button)findViewById(R.id.main_return);
mylist1=(ListView)findViewById(R.id.prescreenlist);
mylist2=(ListView)findViewById(R.id.screeninglist);
mylist3=(ListView)findViewById(R.id.postscreenlist);
prtestData=new ArrayList<String>();
screeningData=new ArrayList<String>();
postData=new ArrayList<String>();
final Bundle bundle = this.getIntent().getExtras();
gotDataScreening1 = getIntent().getStringExtra("TransferedMenuData000");
gotDataScreening2 = getIntent().getStringExtra("TransferedMenuData111");
main_return.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View arg0) {
// TODO Auto-generated method stub
Intent main_return=new Intent(Screening.this,Menu.class);
startActivity(main_return);
}
});
preTextView = (TextView)findViewById(R.id.birth_text11);
screeTextView = (TextView)findViewById(R.id.birth_text12);
postScreenTextView =(TextView)findViewById(R.id.birth_text13);
preTextView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
// TODO Auto-generated method stub
Intent newIntent = new Intent(Screening.this, NewScreening.class);
bundle.putString("FinalDataScreen1", gotDataScreening1);
bundle.putString("FinalDataScreen2", gotDataScreening2);
newIntent.putExtras(bundle);
startActivityForResult(newIntent, requestCode);
}
});
screeTextView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent simpleIntent=new Intent(Screening.this,SimpleScreening.class);
bundle.putString("FinalDataScreen1", gotDataScreening1);
bundle.putString("FinalDataScreen2", gotDataScreening2);
simpleIntent.putExtras(bundle);
startActivityForResult(simpleIntent, requestCode);
}
});
postScreenTextView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
// TODO Auto-generated method stub
Intent postIntent=new Intent(Screening.this,PostScreening.class);
bundle.putString("FinalDataScreen1", gotDataScreening1);
bundle.putString("FinalDataScreen2", gotDataScreening2);
postIntent.putExtras(bundle);
startActivityForResult(postIntent, 1);
}
});
//+++++++++++++++++++++ To Know the SQLite DataBase Version Added By Murali ++++++++++++++++++++++++++++++
/*Cursor cursor = SQLiteDatabase.openOrCreateDatabase(":memory:", null).rawQuery("select sqlite_version() AS sqlite_version", null);
{
String sqliteVersion = "";
while(cursor.moveToNext()){
sqliteVersion += cursor.getString(0);
Log.v("SQL Version", cursor.getString(0));
}
}cursor.close();*/
//+++++++++++++++++ Code Ends Above Code for To Know SQLite Version +++++++++++++++++++++++++++++++
//+++++++++++++++++++++ Below COde For Retriving Data From DataBase ++++++++++++++++++++++++++
Cursor cur = db.rawQuery("SELECT PretestID,Date,txtVCT FROM " + TableName, null);
try {
db = this.openOrCreateDatabase("remoteid.db", MODE_PRIVATE, null);
if(cur != null )
{
if(cur.moveToFirst())
{
do {
String valuOfDate = cur.getString(cur.getColumnIndex("Date"));
String textType = cur.getString(cur.getColumnIndex("txtVCT"));
String valueOfID = cur.getString(cur.getColumnIndex("PretestID"));
//Toast.makeText(getApplicationContext(), valueOfID, Toast.LENGTH_SHORT).show();
prtestData.add(valuOfDate);
}while (cur.moveToNext());
}
}
}
catch(Exception e) {
Log.e("Error", "Error", e);
} finally {
if (db != null)
db.close();
}
cur.close();
Cursor cur1 = db1.rawQuery("SELECT Date FROM " + TableName1, null);
try {
db1 = this.openOrCreateDatabase("remoteid.db", MODE_PRIVATE, null);
if(cur1 != null )
{
if(cur1.moveToFirst())
{
do {
String valuOfDate1 = cur1.getString(cur1.getColumnIndex("Date"));
//Toast.makeText(getApplicationContext(), valueOfID, Toast.LENGTH_SHORT).show();
screeningData.add(valuOfDate1);
}while (cur1.moveToNext());
}
}
}
catch(Exception e) {
Log.e("Error", "Error", e);
} finally {
if (db1 != null)
db1.close();
}
cur1.close();
Cursor cur2 = db2.rawQuery("SELECT Date FROM " + TableName2, null);
try {
db2 = this.openOrCreateDatabase("remoteid.db", MODE_PRIVATE, null);
if(cur2 != null )
{
if(cur2.moveToFirst())
{
do {
String valuOfDate2 = cur2.getString(cur2.getColumnIndex("Date"));
//Toast.makeText(getApplicationContext(), valuOfDate2, Toast.LENGTH_SHORT).show();
postData.add(valuOfDate2);
}while (cur2.moveToNext());
}
}
}
catch(Exception e) {
Log.e("Error", "Error", e);
} finally {
if (db2 != null)
db2.close();
}
cur2.close();
Cursor cur3 = db3.rawQuery("SELECT "+pretestTable+"."+columnID+" AS _id,"+
" "+pretestTable+"."+DDL_Testing_Session+","+
" "+pretestTable+"."+Text_Reason+","+
" "+pretestTable+"."+Text_Howmany+","+
" "+pretestTable+"."+Text_Ques1+","+
" "+pretestTable+"."+RBL_Ques2a+","+
" "+pretestTable+"."+RBL_Ques2b+","+
" "+pretestTable+"."+RBL_Ques3+","+
" "+pretestTable+"."+TXT_Ques4+","+
" "+pretestTable+"."+RBL_Ques5+","+
" "+pretestTable+"."+RBL_Ques6+","+
" "+pretestTable+"."+RBL_Ques7+","+
" "+pretestTable+"."+RBL_Ques8+","+
" "+pretestTable+"."+RBL_Ques9+","+
" "+pretestTable+"."+DDL_Sick+","+
" "+pretestTable+"."+TXT_Ques11+","+
" "+pretestTable+"."+RBL_Ques12+","+
" "+pretestTable+"."+TXT_Ques13+","+
" "+pretestTable+"."+TXT_Ques14+","+
" "+pretestTable+"."+RBL_Ques15+","+
" "+pretestTable+"."+RBL_Ques16+","+
" "+pretestTable+"."+RBL_Ques17+","+
" "+pretestTable+"."+TXT_Ques18+","+
" "+pretestTable+"."+TXT_Vct+","+" WHERE " + pretestTable+"."+columnID+" ="+fingerTable+"."+Template+" AND "+pretestTable+"."+"pretest_id=?" , null);
//" "+pretestTable+"."+TXT_Vct+","+" WHERE" + pretestTable+"."+columnID+" ="+fingerTable+"."+Template+"AND" +"pretest_id=?" , null);
try {
db3 = this.openOrCreateDatabase("remoteid.db", MODE_PRIVATE, null);
if(cur3 != null )
{
if(cur3.moveToFirst())
{
do {
valueOfID = cur3.getString(cur3.getColumnIndex("PretestID"));
valuOfDate = cur3.getString(cur3.getColumnIndex("Date"));
textType = cur3.getString(cur3.getColumnIndex("txtVCT"));
valueOfDDLTS = cur3.getString(cur3.getColumnIndex("ddlTestingSession"));
valueOfReason = cur3.getString(cur3.getColumnIndex("txtReason"));
valueOfHowmany = cur3.getString(cur3.getColumnIndex("txthowmany"));
valueOftxtques1 = cur3.getString(cur3.getColumnIndex("txtques1"));
valueOfrblques2a = cur3.getString(cur3.getColumnIndex("rblques2a"));
valueOfrblques2b = cur3.getString(cur3.getColumnIndex("rblques2b"));
valueOfrblques3 = cur3.getString(cur3.getColumnIndex("rblques3"));
valueOftxtques4 = cur3.getString(cur3.getColumnIndex("txtques4"));
valueOfrblques5 = cur3.getString(cur3.getColumnIndex("rblques5"));
valueOfrblques6 = cur3.getString(cur3.getColumnIndex("rblques6"));
valueOfrblques7 = cur3.getString(cur3.getColumnIndex("rblques7"));
valueOfrblques8 = cur3.getString(cur3.getColumnIndex("rblques8"));
valueOfrblques9 = cur3.getString(cur3.getColumnIndex("rblques9"));
valueOfddlsick = cur3.getString(cur3.getColumnIndex("ddlsick"));
valueOftxtques11 = cur3.getString(cur3.getColumnIndex("txtques11"));
valueOfrblques12 = cur3.getString(cur3.getColumnIndex("rblques12"));
valueOftxtques13 = cur3.getString(cur3.getColumnIndex("txtques13"));
valueOftxtques14 = cur3.getString(cur3.getColumnIndex("txtques14"));
valueOfrblques15 = cur3.getString(cur3.getColumnIndex("rblques15"));
valueOfrblques16 = cur3.getString(cur3.getColumnIndex("rblques16"));
valueOfrblques17 = cur3.getString(cur3.getColumnIndex("rblques17"));
valueOftxtques18 = cur3.getString(cur3.getColumnIndex("txtques18"));
bundle.getString(valueOfID);
bundle.getString(valuOfDate);
bundle.getString(textType);
bundle.getString(valueOfDDLTS);
bundle.getString(valueOfReason);
bundle.getString(valueOfHowmany);
bundle.getString(valueOftxtques1);
bundle.getString(valueOfrblques2a);
bundle.getString(valueOfrblques2b);
bundle.getString(valueOfrblques3);
bundle.getString(valueOftxtques4);
bundle.getString(valueOfrblques5);
bundle.getString(valueOfrblques6);
bundle.getString(valueOfrblques7);
bundle.getString(valueOfrblques9);
bundle.getString(valueOfddlsick);
bundle.getString(valueOftxtques11);
bundle.getString(valueOfrblques12);
bundle.getString(valueOftxtques13);
bundle.getString(valueOftxtques14);
bundle.getString(valueOfrblques15);
bundle.getString(valueOfrblques16);
bundle.getString(valueOfrblques17);
bundle.getString(valueOftxtques18);
}while (cur3.moveToNext());
}
}
}
catch(Exception e) {
Log.e("Error", "Error", e);
} finally {
if (db3 != null)
db3.close();
}
cur3.close();
arrayadapter11 = new simpleefficientadapter(Screening.this,prtestData);
arrayadapter22 = new simpleefficientadapter(Screening.this,screeningData);
arrayadapter33 = new simpleefficientadapter(Screening.this,postData);
mylist1.setAdapter(arrayadapter11);
mylist1.setOnItemClickListener(this);
mylist2.setAdapter(arrayadapter22);
mylist2.setOnItemClickListener(this);
mylist3.setAdapter(arrayadapter33);
mylist3.setOnItemClickListener(this);
}
@Override
public void onItemClick(AdapterView<?> arg0, View arg1, int arg2, long arg3) {
Intent intent;
switch (arg0.getId()) {
case R.id.prescreenlist:
intent = new Intent(getApplicationContext(), NewScreening.class);
intent.putExtra("DateValue", valuOfDate);
intent.putExtra("TT", textType);
intent.putExtra("idValue", valueOfID);
intent.putExtra("ddltsValue", valueOfDDLTS);
intent.putExtra("reasonValue", valueOfReason);
intent.putExtra("howmanyValue", valueOfHowmany);
intent.putExtra("textqus1Value", valueOftxtques1);
intent.putExtra("textqus2aValue", valueOfrblques2a);
intent.putExtra("textqus2bValue", valueOfrblques2b);
intent.putExtra("rbqs3Value", valueOfrblques3);
intent.putExtra("rbqs4Value", valueOftxtques4);
intent.putExtra("rbqs5Value", valueOfrblques5);
intent.putExtra("rbqs6Value", valueOfrblques6);
intent.putExtra("rbqs7Value", valueOfrblques7);
intent.putExtra("rbqs8Value", valueOfrblques8);
intent.putExtra("rbqs9Value", valueOfrblques9);
intent.putExtra("ddlsValue", valueOfddlsick);
intent.putExtra("tq11Value", valueOftxtques11);
intent.putExtra("tq12Value", valueOfrblques12);
intent.putExtra("tq13Value", valueOftxtques13);
intent.putExtra("tq14Value", valueOftxtques14);
intent.putExtra("rbqs15Value", valueOfrblques15);
intent.putExtra("rbqs16Value", valueOfrblques16);
intent.putExtra("rbqs17Value", valueOfrblques17);
intent.putExtra("rbqs18Value", valueOftxtques18);
intent.putExtras(intent);
startActivity(intent);
setResult(RESULT_OK, intent);
break;
case R.id.screeninglist:
intent = new Intent(getApplicationContext(), SimpleScreening.class);
startActivity(intent);
break;
case R.id.postscreenlist:
intent = new Intent(getApplicationContext(), PostScreening.class);
startActivity(intent);
break;
}
}
}
A:
You've got an extra "," right before your WHERE in your statement. Also, as Herb pointed out you have no FROM in your select.
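For illustration only (the table name below is hypothetical; the column names are the ones read from cur3 above), the corrected shape of the query is: no trailing comma before WHERE, and a FROM clause naming the table:
String sql = "SELECT txtReason, txthowmany, txtques1 "   // columns read above
           + "FROM screening_table "                     // hypothetical table name
           + "WHERE ID = 1";                             // note: no comma before WHERE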
|
[
"math.stackexchange",
"0002425546.txt"
] | Q:
Finding a limit of a floor function.
Find the limit of:
$\lim_{x\rightarrow 0}\frac{x}{a}\cdot\lfloor\frac{b}{x}\rfloor$ ($a,b>0$)
Is the following solution correct?
$\frac{x}{a}(\frac{b}{x}-1)\leq \frac{x}{a}\cdot\lfloor\frac{b}{x}\rfloor\leq \frac{xb}{ax}$
$\lim_{x\rightarrow 0}\frac{x}{a}(\frac{b}{x}-1)=\frac{b}{a}$
$\lim_{x\rightarrow 0}\frac{xb}{ax}=\frac{b}{a}$
And using the squeeze theorem, I get
$\lim_{x\rightarrow 0}\frac{x}{a}\cdot\lfloor\frac{b}{x}\rfloor=\frac{b}{a}$
A:
Alternatively,
$$\frac xa\left\lfloor\frac bx\right\rfloor=\frac xa\frac bx-\frac xa\left\{\frac bx\right\}$$
where the braces denote the fractional part.
The first term obviously tends to $\dfrac ba$, while the second vanishes (the fractional part is bounded).
Intuitive explanation:
The parameter $a$ is inessential and WLOG $a=1$. Then with $x=10^{-k}$
$$10^{-k}\lfloor10^kb\rfloor$$
represents the number $b$ truncated to $k$ decimals. Hence this tends to $b$.
|
[
"stackoverflow",
"0018119998.txt"
] | Q:
What would be regular expression of finding all strings starts with $ in java
I have a string containing "This is a time to get involve into $FO, $RTP, $DFG and $RG"
A:
Use following regular expression:
"\\$\\w+"
$ should be escaped.
\w matches digits, letters, and _.
If you only need to match letters, use [a-zA-Z] instead.
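A minimal usage sketch (self-contained; the string is the one from the question):
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DollarWords {
    public static void main(String[] args) {
        String str = "This is a time to get involve into $FO, $RTP, $DFG and $RG";
        Matcher m = Pattern.compile("\\$\\w+").matcher(str);
        while (m.find()) {
            System.out.println(m.group()); // prints $FO, $RTP, $DFG and $RG in turn
        }
    }
}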
A:
This will work too
String str="This is a time to get involve into $FO, $RTP, $DFG and $RG" ;
String[] arr=str.split(" ");
for (String i:arr){
if(i.indexOf("$")==0){
System.out.println(i.replaceAll("\\,",""));
}
}
|
[
"stackoverflow",
"0009385822.txt"
] | Q:
Automatic exporting of SSRS to PowerPoint via URL parameter using Aspose.Slides
Using Aspose.Slides (a product that allows exporting SSRS to PowerPoint), can I supply a URL parameter that automatically outputs as PPT? Native SSRS allows automatic output of PDF, Excel, etc. via the rs:Format parameter, like so:
http://localhost/ReportServer?/myFolder/myReport&rs:Command=Render&rs:Format=PDF
Is there a rs:Format= parameter that allows outputting as PowerPoint? I tried PPT and PowerPoint, but to no avail.
A:
Try to use ASPPT for PPT and ASPPTX for PPTX
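Assuming the Aspose.Slides rendering extension is registered under those names on your report server, the URL from the question would then become:
http://localhost/ReportServer?/myFolder/myReport&rs:Command=Render&rs:Format=ASPPT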
|
[
"math.stackexchange",
"0001710183.txt"
] | Q:
Fastest way to find the area under a curve which is represented by a list of points?
For a piece of software I am writing I need to find the area under a curve that is collected as a list of points that make it up. I am trying to determine the fastest way to get the area.
The only option I see is to approximate a function that represents the list of points and integrate over it, though I was wondering if there are any methods that work on the points directly? Hopefully something a bit faster and possibly more exact.
About the data: The $Y$ values of the points will always be positive, and can possibly be wildly far apart in value. The $X$ values will almost never be evenly spaced and usually be some floating point number.
EDIT: Also. The data is first recorded as a list of points where the $X$ values are spaced evenly. This data is then calibrated based on some defined calibration function. This is what causes the $X$ values to change their spacing.
Knowing this function and the pre-calibration values, could there be some sort of combination of the Trapezoidal Rule and this calibration function to get an even more accurate area than simply using the Trapezoidal Rule on the calibrated data?
A:
It really depends on the accuracy and what you know about the function.
If you can assume your function is fairly smooth$^*$ between your sampled points, you can use the Trapezoidal Rule to quickly find the area. That is, if you have a bunch of points $\{x_i, y_i\}$, you would first sort them in increasing $x$ order and then write:
$$\text{Area} = \sum_i \frac{(x_{i+1}-x_i)(y_{i+1} + y_i)}{2}$$
$^*$ Here "smooth" means that the second derivative is small, or in other words, that your function is reasonably approximated by passing a series of straight lines through your points.
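A direct translation of that formula (a sketch in Python, assuming the data is available as a list of (x, y) pairs):
def trapezoid_area(points):
    # sort by x so consecutive points form left-to-right segments
    pts = sorted(points)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y1 + y0) / 2.0
    return area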
|
[
"stackoverflow",
"0004419510.txt"
] | Q:
UIImageWriteToSavedPhotosAlbum causes EXC_BAD_ACCESS
I develop a CGImage and it works fine when the program displays it on the screen using this:
[output_view.layer performSelectorOnMainThread:@selector(setContents:) withObject: (id) image waitUntilDone:YES];
Now, doing this crashes the application:
UIImageWriteToSavedPhotosAlbum([UIImage imageWithCGImage: image],nil,nil,nil);
I don't understand it at all. Here's the stack during the crash:
#0 0x33b5db6e in memmove (Line 65)
#1 0x341ddee2 in CGAccessSessionGetBytes
#2 0x31ab4488 in alphaProviderGetBytes
#3 0x341ddf52 in CGAccessSessionGetBytes
#4 0x31abbc80 in writeOne
#5 0x31abbdae in _CGImagePluginWriteJPEG
#6 0x31ab2ddc in CGImageDestinationFinalize
#7 0x3037eda2 in imageDataFromImageWithFormatAndProperties
#8 0x3037effc in imageDataFromImageRef
#9 0x3038ea3c in __-[PLAssetsSaver _saveImage:imageData:properties:completionBlock:]_block_invoke_1
#10 0x33c32680 in _dispatch_call_block_and_release
#11 0x33c32ba0 in _dispatch_worker_thread2
#12 0x33bd7250 in _pthread_wqthread
Here is the method with the problem:
-(void)captureOutput: (AVCaptureOutput *) captureOutput didOutputSampleBuffer: (CMSampleBufferRef) sampleBuffer fromConnection: (AVCaptureConnection *) conenction{
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
UInt8 * image_data = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
//Removed part here which modifies the CVPixelBuffer pixels with a particular algorithm.
[bottom_view setNeedsDisplay];
[bottom_view performSelectorOnMainThread:@selector(setBackgroundColor:) withObject: [UIColor colorWithRed:0 green:av/255.0 blue:0 alpha:1] waitUntilDone: YES];
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a Quartz image from the pixel data
CGDataProviderDirectCallbacks providerCallbacks = { 0, GetBytePointer, ReleaseBytePointer, GetBytesAtPosition, 0 };
CGDataProviderRef d_provider = CGDataProviderCreateDirect(image_data,pixels*4,&providerCallbacks);
CGImageRef image = CGImageCreate (width,height,8,32,bytesPerRow,colorSpace,kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst,d_provider,NULL,true,kCGRenderingIntentDefault);
//Draw image
if (needs_photo) {
UIImageWriteToSavedPhotosAlbum([UIImage imageWithCGImage: image],nil,nil,nil);
needs_photo = NO;
}
if (recording) {
[writer_input appendSampleBuffer:sampleBuffer];
}
[output_view.layer performSelectorOnMainThread:@selector(setContents:) withObject: (id) image waitUntilDone:YES];
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGColorSpaceRelease(colorSpace);
CGImageRelease(image);
CGDataProviderRelease(d_provider);
[pool drain];
}
Thank you for any help with this problem.
A:
You can try this code instead of yours and see if it works; maybe your input stream is faulty and not the save operation.
#import <OpenGLES/ES1/gl.h>
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
float width = 64;
float height = 64;
GLubyte *buffer = (GLubyte *) malloc(width * height * 4);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, (width * height * 4), NULL);
// set up for CGImage creation
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, 0);
UIImageWriteToSavedPhotosAlbum([UIImage imageWithCGImage:imageRef],nil,nil,nil);
CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
[pool drain];
So if this works, add parts of your original code little by little to see when it crashes.
|
[
"stackoverflow",
"0034748494.txt"
] | Q:
Google gRPC (C++) building under VisualStudio 2013. Link errors
I'm trying to setup gRPC for my project in Visual Studio. Did everything as described here: http://www.infopulse.com/blog/grpc-framework-by-google-tutorial/. (Trying to compile helloworld example -> Git grpc/examples/cpp/helloworld)
The main problem I'm getting, while compiling:
unresolved external symbol "void _cdecl grcp::FillMetadata
... and so on.
Nothing wrong with protobuf (Everything working)
OpenSSL, zlib - OK.
My Includes:
$(SolutionDir)..
$(SolutionDir)..\include
$(SolutionDir)..\third_party\protobuf\src
$(SolutionDir)\packages\grpc.dependencies.zlib.1.2.8.10\build\native\include
$(SolutionDir)\packages\grpc.dependencies.openssl.1.0.204.1\build\native\include
$(SolutionDir)\packages\gflags.2.1.2.1\build\native\include
$(SolutionDir)\packages\gtest.1.7.0.1\build\native\include
Additional dependencies:
libprotobuf.lib
grpc.lib
gpr.lib
libeay32MDd.lib
ssleay32MDd.lib
Everything in correct folders.
What am I missing here? Maybe some of you have an already working .sln project with all dependencies list? I know that the problem must be connected to some .lib that I'm missing here.
A:
It looks like you're not linking in the grpc++ code. It's unfortunately not a supported target right now, but we're looking to remedy that very soon. I'd really like to be offering a nuget package for C++ users.
If you want to try for now though, make sure you're compiling the vsprojects/vcxproj/grpc++/grpc++.vcxproj project alongside the rest of your code. Let us know how you go (and please file bugs at github.com/grpc/grpc/issues to help us prioritize things).
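Once that project builds, its output library also has to go into your Additional dependencies; assuming the default output name grpc++.lib, the list from the question would become:
libprotobuf.lib
grpc.lib
grpc++.lib
gpr.lib
libeay32MDd.lib
ssleay32MDd.lib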
|
[
"stackoverflow",
"0063456602.txt"
] | Q:
How to filter log events and crashes in Firebase Crashlytics Console for user id?
How can I filter user log events and crashes in the Firebase Console, assuming I know the user ID set with FirebaseCrashlytics.getInstance().setUserId("12345");?
A:
You would just type in the User ID into the User ID searchbar in your dashboard. Here's a picture:
|
[
"stackoverflow",
"0038167177.txt"
] | Q:
Selectable and Un/Select all button (jQuery)
I am attempting to make un/select all buttons that will un/highlight all table rows when clicked. It is fairly straightforward to add the ui-selected class to the table rows, but not to make them draggable. Here is my code that demonstrates the selectable/draggable/droppable functionality:
https://jsfiddle.net/Unfixed/s7mtbn26/3/
I currently only have this for the buttons.
$("#selectall").on('click', function(evt) {
$("tr.selectable").each(function() {
$(this).addClass("ui-selected");
});
evt.preventDefault();
});
$("#unselectall").on('click', function(evt) {
$("tr.selectable").each(function() {
$(this).removeClass("ui-selected");
});
evt.preventDefault();
});
How would I go about making these buttons/links select all of the tables and allow the draggable/droppable functionality to work? Would I have to break my current .selectable() chain into separate functions and use .on() to trigger selectable()?
Any help appreciated, thanks!
A:
Try this one:
$("#selectall").on('click', function(evt) {
$("tr.selectable").each(function() {
$(this).addClass("ui-selected");
});
draggables();
evt.preventDefault();
});
function draggables() {
$("tr.ui-droppable").draggable('destroy');
$("tr.ui-droppable").droppable("destroy");
$("tr.ui-selected").draggable({
helper: function() {
var c = $("tr.ui-selected").length;
var dom = [];
dom.push("<div style=\"border:2px solid black;width:50px;height:20px;line-height:25px;\">",
"<center>Files Selected: " + c + "</center></div>");
return $(dom.join(''));
},
revert: 'invalid',
appendTo: 'parent',
containment: '#filemanager',
axis: 'y',
cursor: '-moz-grabbing'
});
$("tr.droppable").droppable({
hoverClass: "ui-state-active"
});
}
See updated fiddle :
https://jsfiddle.net/ersamrow/s7mtbn26/6/
|
[
"computergraphics.stackexchange",
"0000001884.txt"
] | Q:
OpenGL vertex color
Why do I need to specify the same name for the color input in the fragment shader and the output color from the vertex shader?
//Vertex shader
out vec3 vertex_color;
void main()
{
vertex_color=vec3(1.0,0.0,0.0);
}
//Fragment shader
in vec3 vertex_color;
out vec4 frag_color;
void main()
{
frag_color=vec4(vertex_color,1.0);
}
Since we already hand the color value from the vertex shader to the fragment shader, why do we need the same names?
A:
Using the same name is exactly how you tell OpenGL that you want the value passed through from vertex to fragment.
You say "we already hand the color value from vertex shader to fragment shader", but that's not correct. Usually, the only value that's passed between shaders automatically is position, and that's only because it feeds into the GPU's rasterization hardware to draw the triangle on the screen.
Any other values such as color, normal, texture coordinates, etc. that you want passed between shader stages have to be explicitly hooked up by the shader author. And the way you do that in GLSL is to create an out variable in one stage, and an in variable in the next stage, with the same name.
A:
Because you may want to pass more than one attribute through to the fragment shader. Two essential ones, once you start doing lighting and textured meshes, are the normal vector and the texture coordinates.
In newer OpenGL versions you can give a numbered location to the variables you pass through, using layout(location=1). Then the names don't have to match.
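A sketch of that variant (it needs a GLSL version that allows explicit locations on inter-stage variables, e.g. #version 410 or the separate shader objects extension):
//Vertex shader
layout(location = 1) out vec3 vertex_color_out;
void main()
{
    vertex_color_out = vec3(1.0, 0.0, 0.0);
}
//Fragment shader
layout(location = 1) in vec3 incoming_color;  // different name, same location
out vec4 frag_color;
void main()
{
    frag_color = vec4(incoming_color, 1.0);
}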
|
[
"law.stackexchange",
"0000003369.txt"
] | Q:
Can I keep my last name if I get married?
Can I keep my last name the same if I get married? I don't want to hyphenate it either. Is this possible? Is it a MUST that I change or hyphenate it?
I am in New York City, New York.
A:
You can keep your name; this is the default. Source:
Your surname does not change automatically upon marriage unless you elect to change it.
Nothing in the law requires you to change your name when getting married; it is your personal choice.
You are not required to have the same surname as your spouse.
|
[
"electronics.stackexchange",
"0000278068.txt"
] | Q:
Colpitts oscillator not oscillating
I have tried breadboarding a simple Colpitts oscillator, just to see how it works (and to get to use my 'scope for something more interesting than measuring static voltages).
I have been following this example, specifically the second circuit design:
A Colpitts oscillator http://www.play-hookey.com/oscillators/lc/images/colpitts_oscillator_cb.gif
I'm feeding it 5 V, the resistors are all at 1k, L is a .22 µH fixed inductor, Q is a 2N4401, C1 and C2 are .001 µF ceramics and the unlabeled cap at the base is a 220 pF ceramic (maybe way too low?), and I'm probing between emitter and ground.
Now, admittedly, these values are more or less randomly chosen from my component drawers. In this case I am not interested in obtaining a specific frequency as long as it's low enough for me to measure it (50 MHz), so I figured I could just throw in any values for the caps and the inductor, as long as they were high enough - I've read that this can actually be a pretty accurate method of measuring capacitance and inductance respectively, based on the frequency you get.
Questions:
Why is the circuit not oscillating? I'm measuring a DC voltage of 1.87 V at my probe point.
How do you calculate the proper resistor values (or ratios)?
What's the base cap used for? Just power decoupling?
Am I probing in the right place?
A:
My answer:
1.) I do not know because I didn't recalculate the circuit. However, it is YOUR task to find a suitable design (not using random part values), see point 2).
2.) At first, you must understand the circuit (why it can oscillate). There is a frequency-selective feedback network with a bandpass characteristic (L in parallel with C1 and C2). Do not overlook that the supply voltage is identical with signal ground.
Hence, at the midband frequency the phase shift will be zero. A part of this signal (depending on the C1-C2 ratio) is fed back to the emitter establishing the required positive feedback (loop gain).
3.) It is the task of the base capacitor to keep the base at signal ground (transistor in common base configuration and positive gain, see 2).
4.) The classical (normal) output for common-base stages is at the collector.
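As a rough sanity check (assuming ideal components and ignoring the transistor's own capacitances), the tank formed by L, C1 and C2 resonates near $f \approx \frac{1}{2\pi\sqrt{L\,C_{eq}}}$ with $C_{eq}=\frac{C_1 C_2}{C_1+C_2}$. With the values from the question ($L=0.22\ \mu\text{H}$, $C_1=C_2=1\ \text{nF}$, so $C_{eq}=0.5\ \text{nF}$) that works out to roughly 15 MHz, which a 50 MHz scope can display.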
|
[
"stackoverflow",
"0010710405.txt"
] | Q:
Insert in to two tables using php
I have looked around and I'm still not sure how to do this. I have tried several different ways and it's obvious I'm still doing it wrong. Please can you help.
I have an accounts table containing the account username and password, and a separate contacts table which is linked to the accounts table by the username. I need to insert into both of these. Here is what I have so far.
Thanks
//signup.php
include 'connect.php';
echo '<h3>Sign up</h3>';
if($_SERVER['REQUEST_METHOD'] != 'POST')
{
/*the form hasn't been posted yet, display it
note that the action="" will cause the form to post to the same page it is on */
echo '<form action="" method="post">
<br>
<table width="0" border="0">
<tr>
<th align="left" scope="col">Name:</th>
<th scope="col"><input type="text" name="name"></th>
</tr>
<tr>
<th align="left" scope="row">Phone:</th>
<td><input type="text" name="phone"></td>
</tr>
<tr>
<th align="left" scope="row">Address</th>
<td><textarea name="address" rows="4"></textarea></td>
</tr>
<tr>
<th align="left" scope="row"><p>Postcode</p></th>
<th align="left" scope="row"><input type="text" name="postcode" id="postcode"></th>
</tr>
<tr>
<th align="left" scope="row">Email</th>
<td><input type="text" name="email"></td>
</tr>
<tr>
<th align="left" scope="row">Username</th>
<td><input type="type" name="username"></td>
</tr>
<tr>
<th align="left" scope="row">Password</th>
<td align="left"><input type="password" name="password"></td>
</tr>
<tr align="left">
<th colspan="2" scope="row"><input type="Submit"></th>
</tr>
</table>
</form>';
}
else
{
/* so, the form has been posted, we'll process the data in three steps:
1. Check the data
2. Let the user refill the wrong fields (if necessary)
3. Save the data
*/
$errors = array(); /* declare the array for later use */
if(isset($_POST['username']))
{
//the user name exists
if(!ctype_alnum($_POST['username']))
{
$errors[] = 'The username can only contain letters and digits.';
}
if(strlen($_POST['username']) > 30)
{
$errors[] = 'The username cannot be longer than 30 characters.';
}
}
else
{
$errors[] = 'The username field must not be empty.';
}
if(!empty($errors)) /*check for an empty array, if there are errors, they're in this array (note the ! operator)*/
{
echo 'Uh-oh.. a couple of fields are not filled in correctly..';
echo '<ul>';
foreach($errors as $key => $value) /* walk through the array so all the errors get displayed */
{
echo '<li>' . $value . '</li>'; /* this generates a nice error list */
}
echo '</ul>';
}
else
{
//the form has been posted without, so save it
//notice the use of mysql_real_escape_string, keep everything safe!
//also notice the sha1 function which hashes the password
$sql = "INSERT INTO
tbl_accounts(accounts_username, accounts_password, accounts_date)
VALUES('" . mysql_real_escape_string($_POST['username']) . "',
'" . sha1($_POST['password']) . "',
NOW())";
$sql2= "INSERT INTO
tbl_contacts(contacts_username, contacts_name, contacts_email, contacts_phone, contacts_address, contacts_postcode, contacts_date)
VALUES('" . mysql_real_escape_string($_POST['username']) . "',
'" . mysql_real_escape_string($_POST['name']) . "',
'" . mysql_real_escape_string($_POST['email']) . "',
'" . mysql_real_escape_string($_POST['phone']) . "',
'" . mysql_real_escape_string($_POST['address']) . "',
'" . mysql_real_escape_string($_POST['postcode']) . "',
NOW())";
$result = mysql_query($sql);
if(!$result)
$result = mysql_query($sql2);
if(!$result)
{
//something went wrong, display the error
echo 'Something went wrong while registering. Please try again later.';
//echo mysql_error(); //debugging purposes, uncomment when needed
}
else
{
echo 'Successfully registered. You can now <a href="signin.php">sign in</a> and start posting! :-)';
}
}
}
A:
Searching stackoverflow you'll find links like those below:
Stack 1
Stack 2
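Note also that the posted code only runs $sql2 when $sql has failed (the if(!$result) guard), so on a successful first insert the contacts row is never written. A minimal sketch in the question's own mysql_* style, running both inserts and checking both results (ideally the two inserts would also be wrapped in a transaction so a failure cannot leave an orphaned account row):
$result = mysql_query($sql);
$result2 = mysql_query($sql2);
if (!$result || !$result2) {
    // at least one of the two inserts failed
    echo 'Something went wrong while registering. Please try again later.';
    //echo mysql_error(); //debugging purposes, uncomment when needed
} else {
    echo 'Successfully registered. You can now <a href="signin.php">sign in</a> and start posting! :-)';
}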
|
[
"askubuntu",
"0001160993.txt"
] | Q:
Remove ip addresses from lines
I have a file.txt containing subdomains and ip addresses:
input:
104.112.200.252 www2test.google.com
104.112.200.252 www.google.com
104.211.52.69 voice.google.com
104.211.52.69 voice.google.com
psfthrpreprd1.oci.google.com
voice2.google.com
psfthrpreprd3.oci.google.com
voice.google.com
psfthrpreprd4.oci.google.com
But I want only subdomains as output:
www2test.google.com
www.google.com
voice.google.com
voice.google.com
psfthrpreprd.oci.google.com
psfthrpreprd1.oci.google.com
voice2.google.com
psfthrpreprd3.oci.google.com
voice.google.com
psfthrpreprd4.oci.google.com
any suggestions thanks in advance ;)
A:
Use awk:
awk '{print $NF}' file.txt
|
[
"stackoverflow",
"0028269992.txt"
] | Q:
PFUser/PFInstallaion saveEventually - Missing argument for parameter #1 in call
I am using PFUser and PFInstallation with the saveEventually method. However, when I call the method I get an error that reads:
Missing argument for parameter #1 in call
Any idea what Xcode is asking for? Do I need to import any special header files? Here is my code:
PFUser.currentUser().saveEventually()
installation.saveEventually()
A:
I figured it out. I had to import BFTask.h in my bridging header.
#import <Bolts/BFTask.h>
|
[
"meta.stackexchange",
"0000007458.txt"
] | Q:
Why hasn't anyone won Fanatic yet?
I recently got W00T (Enthusiast) and am now looking to Fanatic. I saw on another post that SO has been open for 11 months but no one has won Fanatic yet. Is it broken or just a coincidence?
A:
The Fanatic badge was just unveiled and started tracking around 6/26/09. It'll be September before anybody gets it.
A:
Today's the day.
Front page is stacked with them.
alt text http://img30.imageshack.us/img30/4804/fanaticbadge.png
A:
Enthusiast and Fanatic are recently added badges, and the time period started after they were added.
|
[
"stackoverflow",
"0024641824.txt"
] | Q:
Piping to More Than One Location - PowerShell
I want to do something like this-
"Error array cleared." | Out-File $ErrorLog $InfoLog -Append
However it's not working. Is this possible without writing another line to output it to the other file?
A:
One way is with a short function like this:
function Out-FileMulti {
param(
[String[]] $filePath
)
process {
$text = $_
$filePath | foreach-object {
$text | out-file $_ -append
}
}
}
Example:
"Out-FileMultiTest" | Out-FileMulti "test1.log","test2.log"
(Writes the string "Out-FileMultiTest" to both test1.log and test2.log)
|
[
"stackoverflow",
"0053827709.txt"
] | Q:
Access files within resource directory in maven generated jar
I've done my research, but I just really can't get it to work.
I'm using Spring Boot with Maven. Thymeleaf and jquery for my frontend.
My directory:
project-system/
├── mvnw
├── mvnw.cmd
├── pom.xml
├── src
│ ├── main
│ │ ├── java
│ │ │ └── rigor
│ │ │ └── io
│ │ │ └── projectsystem
│ │ │ ├── rush
│ │ │ │ ├── TemporaryController.java
│ │ │ └── ProjectSystemApplication.java
│ │ └── resources
│ │ ├── application.properties
│ │ ├── MOCK_DATA.json
│ │ ├── static
│ │ └── templates
│ │ ├── index.html
│ │ └── records.html
Inside TemporaryController, I'm doing this operation:
list = new ObjectMapper().readValue(new File("./src/main/resources/MOCK_DATA.json"), new TypeReference<List<POJO>>() {});
So with this, I'm able to access the MOCK_DATA.json under the resources directory.
But now, I want to package it into a jar. This is proving a bit troublesome for me. Here's my build in my pom file:
<build>
<plugins>
<plugin>
<artifactId>maven-resources-plugin</artifactId>
<version>2.6</version>
<executions>
<execution>
<id>copy-resources</id>
<phase>validate</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${basedir}/target/classes/src/main</outputDirectory>
<includeEmptyDirs>true</includeEmptyDirs>
<resources>
<resource>
<directory>${basedir}/src/main/</directory>
<filtering>false</filtering>
</resource>
</resources>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<configuration>
<archive>
<manifest>
<addClasspath>true</addClasspath>
<classpathPrefix>lib/</classpathPrefix>
<mainClass>rigor.io.projectsystem.ProjectSystemApplication</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
</plugins>
</build>
And the resulting target directory this gives me is:
├── target
│ ├── classes
│ │ ├── application.properties
│ │ ├── MOCK_DATA.json
│ │ ├── rigor
│ │ │ └── io
│ │ │ └── projectsystem
│ │ │ ├── rush
│ │ │ │ ├── TemporaryController.class
│ │ │ └── ProjectSystemApplication.class
│ │ ├── src
│ │ │ └── main
│ │ │ ├── java
│ │ │ │ └── rigor
│ │ │ │ └── io
│ │ │ │ └── projectsystem
│ │ │ │ ├── rush
│ │ │ │ │ ├── TemporaryController.java
│ │ │ │ └── ProjectSystemApplication.java
│ │ │ └── resources
│ │ │ ├── application.properties
│ │ │ ├── MOCK_DATA.json
│ │ │ ├── static
│ │ │ └── templates
│ │ │ ├── index.html
│ │ │ └── records.html
│ │ └── templates
│ │ ├── index.html
│ │ └── records.html
│ ├── generated-sources
│ │ └── annotations
│ ├── generated-test-sources
│ │ └── test-annotations
│ ├── maven-archiver
│ │ └── pom.properties
│ ├── maven-status
│ │ └── maven-compiler-plugin
│ │ ├── compile
│ │ │ └── default-compile
│ │ │ ├── createdFiles.lst
│ │ │ └── inputFiles.lst
│ │ └── testCompile
│ │ └── default-testCompile
│ │ ├── createdFiles.lst
│ │ └── inputFiles.lst
│ ├── surefire-reports
│ │ ├── rigor.io.projectsystem.ProjectSystemApplicationTests.txt
│ │ └── TEST-rigor.io.projectsystem.ProjectSystemApplicationTests.xml
│ ├── test-classes
│ │ └── rigor
│ │ └── io
│ │ └── projectsystem
│ │ ├── ProjectSystemApplicationTests$1.class
│ │ └── ProjectSystemApplicationTests.class
│ ├── project-system-0.0.1-SNAPSHOT.jar
│ └── project-system-0.0.1-SNAPSHOT.jar.original
As you can see, there is some redundancy happening, and when I try to run the jar file, it gives me a java.io.FileNotFoundException: ./src/main/resources/MOCK_DATA.json (No such file or directory)
A:
You cannot treat an internal jar resource as a regular file; you need to ask the classloader to do it for you, and the path would be relative to the top level of the jar (so you just want to ask the classloader to load "MOCK_DATA.json" without any path), or to the /resources folder. This is how to do it:
Classpath resource within jar
BTW, the /src/main/resources folder is automatic in Maven unless you need to configure filtering etc. other than the defaults :)
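A minimal sketch of what that looks like with the ObjectMapper call from the question (MOCK_DATA.json sits at the root of the classpath, as the target/classes listing above shows):
// works both from the IDE and from inside the packaged jar
list = new ObjectMapper().readValue(
        getClass().getResourceAsStream("/MOCK_DATA.json"),
        new TypeReference<List<POJO>>() {});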
|
[
"codereview.stackexchange",
"0000046361.txt"
] | Q:
Different factorial algorithm implementations and measuring their execution time
I'm new to C, and as an exercise I'm building 4 different factorial algorithm implementations and measuring their running time. I'm looking for this feedback, especially:
The implementation of the factorial algorithms. Can they be improved? Is there something you would do differently? Is something wrong?
The implementation of the execution time measuring system. Is there a different/better way to measure function running time? Would you do something differently? Is something wrong?
But I'm also interested in anything else you see wrong in the program.
The program works from the command line:
./factorial <factorial to be calculated> <number of repetitions>
The program then prints the results to the console.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
// =========================================================================
// Functions
// =========================================================================
// Using for loop
double iterative_for_factorial(double n) {
double acc = 1;
for (n = n; n > 0; n--) acc *= n;
return acc;
}
// Using while loop
double iterative_while_factorial(double n) {
double acc = 1;
while (n > 0) {
acc *= n;
n--;
}
return acc;
}
// Using recursion
double recursive_factorial(double n) {
if (n < 1) return 1;
return n * recursive_factorial(n-1);
}
// Using recursion and ternary operator
double recursive_ternary_factorial(double n) {
return n < 1 ? 1 : n * recursive_ternary_factorial(n - 1);
}
// =========================================================================
// Timing
// =========================================================================
// Measures the CPU time of a function executed x times
double timeIt(double n, double times, double(*f)(double)) {
clock_t start = clock();
for (double i = 0; i < times; i++) {
f(n);
}
return (clock() - start);
}
// =========================================================================
// Main()
// =========================================================================
int main(int argc, char *argv[]) {
if (argc != 3) {
printf("\nusage: factorial <factorial to calculate> <repetitions>\n\n");
return 1;
}
double number = atof(argv[1]);
double times = atof(argv[2]);
double iterative_for_time = timeIt(number, times, iterative_for_factorial) / 1000000.0;
double iterative_while_time = timeIt(number, times, iterative_while_factorial) / 1000000.0;
double recursive_time = timeIt(number, times, recursive_factorial) / 1000000.0;
double recursive_ternary_time = timeIt(number, times, recursive_ternary_factorial) / 1000000.0;
double iterative_for_average = iterative_for_time / times;
double iterative_while_average = iterative_while_time / times;
double recursive_average = recursive_time / times;
double recursive_ternary_average = recursive_ternary_time / times;
printf("\n");
printf("==============================================\n");
printf("==============================================\n\n");
printf("Factorial Algorithm\n");
printf("Factorial calculated: %f\n", number);
printf("Number of times: %f\n", times);
printf("(all results are in seconds) \n");
printf("----------------------------------------------\n");
printf("\n");
printf("Total time:\n");
printf("iterative_for_factorial: %f\n", iterative_for_time);
printf("iterative_while_factorial: %f\n", iterative_while_time);
printf("recursive_factorial: %f\n", recursive_time);
printf("recursive_ternary_factorial: %f\n", recursive_ternary_time);
printf("\n");
printf("Average time:\n");
printf("iterative_for_factorial: %f\n", iterative_for_average);
printf("iterative_while_factorial: %f\n", iterative_while_average);
printf("recursive_factorial: %f\n", recursive_average);
printf("recursive_ternary_factorial: %f\n", recursive_ternary_average);
printf("\n");
printf("==============================================\n");
printf("==============================================\n\n");
return 0;
}
A:
I find it odd that the input to the factorial functions is a double rather than an int. There's no need to consider non-integral values of n, nor is there a need to allow large n. In fact, n! will start losing precision at around n = 19, since an IEEE double can only store numbers up to about 9 × 10^16 accurately. At n = 171, the result overflows completely.
|
[
"stackoverflow",
"0015657773.txt"
] | Q:
Score field in html text acts as string when trying to = -= values
I'm trying to make a scoreboard, and it's adding numbers weirdly:
My html:
<div id="game-info">
Top score: <p id="top-score">0</p><br>
Current score: <p id="current">0</p><br>
Games played: <p id="played-games">0</p>
</div>
My javascript:
var score = document.getElementById("current");
if(blabla scored points){
score.innerHTML += 100;
}
if(blabla scored -points){
score.innerHTML -= 10;
}
The minus points work fine-ish, at least it adds up negatively, but the positive score will add itself to the end of the current score, like so:
Current score: <p id="current">0100</p><br>
or
Current score: <p id="current">-20100</p><br>
Does this have anything to do with it being a string and not an int? I'm confused why the negative score works and the positive doesn't when it's the same markup.
A:
You're concatenating a string: innerHTML is a string, and += with a string operand concatenates, while -= always coerces both operands to numbers (which is why the subtraction appeared to work). You need to convert the current score to a number first - try something like this:
var score = document.getElementById("current");
if(blabla scored points){
// parse current score as integer and then add 100
score.innerHTML = parseInt(score.innerHTML,10) + 100;
}
if(blabla scored -points){
// parse current score as integer and then subtract 10
score.innerHTML = parseInt(score.innerHTML,10) - 10;
}
parseInt() parses a string to an integer
Extra note: when using parseInt() and using a radix
An integer that represents the radix of the above mentioned string. Always specify this parameter to eliminate reader confusion and to guarantee predictable behavior. Different implementations produce different results when a radix is not specified.
|
[
"stackoverflow",
"0013031386.txt"
] | Q:
Value of un-initialized enum in C++ MFC
Suppose I have a class that contains an enum member and that member is not initialized with any data.
I want to check whether some value has been placed (a sort of validation mechanism I am making for the class, to validate that all members have been initialized). What can I compare the enum member to? NULL? Or does it receive 0? (0 corresponds to the first enumerator, so that would not be distinguishable from a real value.)
A:
You can't compare an un-initialized variable with anything, because it's undefined behavior to read it.
So your safest bet is to keep an UNSET state as part of the enum, initialize it to this state, and compare it with that.
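A minimal sketch of that approach (names are made up for illustration):
enum FieldState { FIELD_UNSET, FIELD_VALUE_A, FIELD_VALUE_B };

class Record
{
public:
    Record() : m_state(FIELD_UNSET) {}   // always start in the sentinel state
    bool IsInitialized() const { return m_state != FIELD_UNSET; }
    void SetState(FieldState s) { m_state = s; }
private:
    FieldState m_state;
};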
|
[
"tex.stackexchange",
"0000091678.txt"
] | Q:
Self-replicating (La)TeX document
Since TeX and LaTeX can print out any text, it should be possible to write a self-replicating document, i.e., a document that is typeset as a PDF/DVI of itself. Have you seen something like that?
A:
A long time ago, in a country far far away, under the influence of Hofstadter's Gödel, Escher, Bach, I spent a merry few minutes playing with programs that would print out themselves. One goal was to make a minimal such program in a particular language, another was to have a general scheme that could be added to make any program (in that language) do this (in addition to what the program was supposed to do). In pursuit of that latter goal I figured out some general ingredients that could be used to do this. These were:
The ability to convert from an integer to a character.
The ability to make decisions.
The ability to iterate over a list.
With these, the scheme is as follows. Create a list containing the code converted into some integer representation of the symbols it contains. Insert into that list a special character (usually 0 is a safe bet) at a particular point. Then the program iterates through the list. Its normal behaviour is to convert each integer into the character it represents and output that. However, when it encounters 0 it simply outputs the list.
Here's a TeX version of that:
\tt
\parindent0pt
\emergencystretch3em
\def\A{92, 116, 116, 10, 92, 112, 97, 114, 105, 110, 100, 101, 110, 116, 48, 112, 116,
10, 92, 101, 109, 101, 114, 103, 101, 110, 99, 121, 115, 116, 114, 101, 116, 99, 104,
51, 101, 109, 10, 92, 100, 101, 102, 92, 65, 123, 0, 125, 10, 92, 108, 111, 110, 103, 92,
100, 101, 102, 92, 84, 35, 49, 44, 123, 37, 10, 92, 105, 102, 110, 117, 109, 35, 49, 60,
48, 92, 114, 101, 108, 97, 120, 10, 92, 101, 108, 115, 101, 10, 92, 105, 102, 110, 117,
109, 35, 49, 62, 48, 92, 114, 101, 108, 97, 120, 10, 92, 105, 102, 110, 117, 109, 35, 49,
61, 49, 48, 92, 114, 101, 108, 97, 120, 10, 92, 112, 97, 114, 10, 92, 101, 108, 115, 101,
10, 92, 99, 104, 97, 114, 35, 49, 10, 92, 102, 105, 10, 92, 101, 108, 115, 101, 10, 92,
65, 10, 92, 102, 105, 10, 92, 101, 120, 112, 97, 110, 100, 97, 102, 116, 101, 114, 92,
84, 92, 102, 105, 125, 10, 92, 101, 120, 112, 97, 110, 100, 97, 102, 116, 101, 114, 92,
84, 92, 65, 92, 98, 121, 101, -1, }
\long\def\T#1,{%
\ifnum#1<0\relax
\else
\ifnum#1>0\relax
\ifnum#1=10\relax
\par
\else
\char#1
\fi
\else
\A
\fi
\expandafter\T\fi}
\expandafter\T\A\bye
This produces:
A:
Save as quine.tex and compile with tex (or pdftex for PDF output):
\def\T{
\tt \hsize 32.5em\parindent 0pt\def \S {\def \S ##1>{}}\S \string
\def \string \T \string {\par \expandafter \S \meaning \T \string
}\par \expandafter \S \meaning \T \footline {} \end }
\tt \hsize 32.5em\parindent 0pt\def \S {\def \S ##1>{}}\S \string
\def \string \T \string {\par \expandafter \S \meaning \T \string
}\par \expandafter \S \meaning \T \footline {} \end
It is due to Péter Szabó and has been published on TUGboat, vol. 29 (2008), p. 207 as part of the TeX Pearls section at EuroBachoTeX 2007.
Here's the output:
A:
Here is a simple example:
\documentclass{article}
\pagestyle{empty}
\usepackage{listings}
\begin{document}
\lstinputlisting{\jobname}
\end{document}
The result looks as the original:
But if you want to be able to copy from the PDF, you must use this code:
\documentclass{article}
\pagestyle{empty}
\usepackage{listings}
\lstset{basicstyle=\ttfamily,flexiblecolumns=true}
\begin{document}
\lstinputlisting{\jobname}
\end{document}
The result:
|
[
"stackoverflow",
"0021416438.txt"
] | Q:
Convert JSON to Thrift Object in Nodejs
I have the following thrift file:
union D{ 1: string s; }
struct B{ 1: required D d; }
struct C{ 1: required D d; }
union A{ 1: B b; 2: C c; }
service Test { void store(1: A a) }
And I have the following JSON object, which was obtained by parsing a string.
var data_json = {
'b': {
'd': {
's': "hello"
}
}
};
I'm trying to write a thrift client in Nodejs which calls the store method with data_json as its argument, but I get the following error when I do so:
/home/aakash/Documents/thrift0/gen-nodejs/test_types.js:225
this.b.write(output);
^
TypeError: Object #<Object> has no method 'write'
at Object.A.write (/home/aakash/Documents/thrift0/gen-nodejs/test_types.js:225:12)
at Object.Test_store_args.write (/home/aakash/Documents/thrift0/gen-nodejs/Test.js:57:12)
at Object.TestClient.send_store (/home/aakash/Documents/thrift0/gen-nodejs/Test.js:113:8)
at Object.TestClient.store (/home/aakash/Documents/thrift0/gen-nodejs/Test.js:105:8)
at Object.<anonymous> (/home/aakash/Documents/thrift0/client.js:40:8)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
However, it works fine when I pass the following object as argument:
var data_thrift = new ttypes.A({
'b': new ttypes.B({
'd': new ttypes.D({
's': "hello"
})
})
});
Is there a way to pass data_json directly to store, or a way to convert data_json to data_thrift ?
A:
I'm trying to write a thrift client in Nodejs which calls the store
method with data_json as its argument, but I get the following error
when I do so:
/home/aakash/Documents/thrift0/gen-nodejs/test_types.js:225
this.b.write(output);
Is there a way to pass data_json directly to store, or a way to
convert data_json to data_thrift ?
TL;DR
Sure. Just provide a write() method for each of your Objects that implements what is generated by Thrift otherwise, or provide an alternative serialization mechanism for your data to produce the exact same output.
Explanation
The code generated by Thrift and the Thrift library provide the necessary infrastructure for using Thrift and to ensure the interoperability of both Thrift RPC calls and serialized data. To achieve this, the serialized data follow a certain predefined structure, which is determined by the protocol you use. All the infrastructure of an RPC and serialization framework, including the generacted code for your data, is just there to provide the means to do this.
If your application holds data in another way internally (which is perfectly ok) you will have to convert them back and forth. The only way to store the data directly would be to generate the exact same JSON output that the Thrift infrastructure would produce for your data, otherwise the other side will not be able to deserialize your JSON.
But technically, following that path you will end up reinventing the wheel.
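For the small example above the conversion can simply be written by hand; a hypothetical helper that mirrors the nesting (it only covers the b branch of union A, which is what data_json contains):
function toThriftA(json) {
  // build the generated Thrift types from the plain JSON object
  return new ttypes.A({
    b: new ttypes.B({
      d: new ttypes.D({ s: json.b.d.s })
    })
  });
}

var data_thrift = toThriftA(data_json);
// data_thrift can now be passed to the store() call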
|
[
"stackoverflow",
"0016130116.txt"
] | Q:
See the date last accessed a directory php
I was wondering if it were possible to see, with PHP, when a folder was last accessed. I was thinking about using 'touch()' in PHP, but that's more for a file, isn't it?
Thanks in advance!
A:
As far as I know this information is only stored about files (According to others this is wrong and it is for directories - see Dinesh's answer). However you can iterate over each file in a directory and discover the most recently accessed file in the directory (Not exactly what you want but possibly as close as you will get). Using the DirectoryIterator:
<?php
$iterator = new DirectoryIterator(dirname(__FILE__));
$accessed = 0;
foreach ($iterator as $fileinfo) {
if ($fileinfo->isFile()) {
if ($fileinfo->getATime() > $accessed) {
$accessed = $fileinfo->getAtime();
}
}
}
print($accessed);
?>
http://php.net/manual/en/directoryiterator.getatime.php
A:
You can use fileatime(), which works for both files and directories:
fileatime('dir');
|
[
"stackoverflow",
"0042893720.txt"
] | Q:
Groovy command using inheritance cannot compile in Spring Boot remote shell
I have an abstract groovy class with some utility methods that I want to extend from other groovy command classes (for use in Spring Boot's remote shell). However, when I attempt to run the groovy command class, I get a CommandException.
My groovy abstract class looks like the following.
package commands
import com.xyz.MyService
import org.crsh.command.InvocationContext
import org.springframework.beans.factory.BeanFactory
abstract class abstractcmd {
private static final String SPRING_FACTORY = "spring.beanfactory"
protected MyService getMyService(InvocationContext context) {
return getBeanFactory(context).getBean(MyService.class);
}
private BeanFactory getBeanFactory(InvocationContext context) {
return context.attributes[SPRING_FACTORY];
}
}
My groovy command class looks like the following.
package commands
import org.crsh.cli.Command
import org.crsh.cli.Usage
import org.crsh.command.InvocationContext
@Usage("do something commands")
class foo extends abstractcmd {
@Command
@Usage("bar")
def String bar(InvocationContext context) {
try {
getMyService(context).bar()
return "did bar"
} catch (Exception e) {
return String.format("could not do bar: %s", e.toString())
}
}
}
When I SSH into the shell and execute foo bar I get the following exception.
org.crsh.shell.impl.command.spi.CommandException: Could not create command foo instance
at org.crsh.lang.impl.groovy.GroovyCompiler$1.getCommand(GroovyCompiler.java:192) ~[crash.shell-1.3.2.jar:?]
at org.crsh.lang.LanguageCommandResolver.resolveCommand(LanguageCommandResolver.java:101) ~[crash.shell-1.3.2.jar:?]
at org.crsh.shell.impl.command.CRaSH.getCommand(CRaSH.java:100) ~[crash.shell-1.3.2.jar:?]
at org.crsh.shell.impl.command.CRaSHSession.getCommand(CRaSHSession.java:96) ~[crash.shell-1.3.2.jar:?]
at org.crsh.lang.impl.script.PipeLineFactory.create(PipeLineFactory.java:89) ~[crash.shell-1.3.2.jar:?]
at org.crsh.lang.impl.script.ScriptRepl.eval(ScriptRepl.java:88) ~[crash.shell-1.3.2.jar:?]
at org.crsh.shell.impl.command.CRaSHSession.createProcess(CRaSHSession.java:163) ~[crash.shell-1.3.2.jar:?]
at org.crsh.shell.impl.async.AsyncProcess.execute(AsyncProcess.java:172) ~[crash.shell-1.3.2.jar:?]
at org.crsh.console.Console.iterate(Console.java:219) ~[crash.shell-1.3.2.jar:?]
at org.crsh.console.Console.on(Console.java:158) ~[crash.shell-1.3.2.jar:?]
at org.crsh.console.Console.on(Console.java:135) ~[crash.shell-1.3.2.jar:?]
at org.crsh.console.jline.JLineProcessor.run(JLineProcessor.java:204) ~[crash.shell-1.3.2.jar:?]
at org.crsh.ssh.term.CRaSHCommand.run(CRaSHCommand.java:99) ~[crash.connectors.ssh-1.3.2.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
If I remove the package commands line from both the abstractcmd and foo classes, then the IDE (IntelliJ) and shell both complain about abstractcmd not being found.
2017-03-19 19:34:06 ERROR org.crsh.shell.impl.command.CRaSHProcess.execute:84 - Error while evaluating request 'foo' Could not compile command script foo
org.crsh.shell.impl.command.spi.CommandException: Could not compile command script foo
at org.crsh.lang.impl.groovy.GroovyClassFactory.parse(GroovyClassFactory.java:65) ~[crash.shell-1.3.2.jar:?]
at org.crsh.lang.impl.groovy.GroovyCompiler$1.getCommand(GroovyCompiler.java:172) ~[crash.shell-1.3.2.jar:?]
at org.crsh.lang.LanguageCommandResolver.resolveCommand(LanguageCommandResolver.java:101) ~[crash.shell-1.3.2.jar:?]
at org.crsh.shell.impl.command.CRaSH.getCommand(CRaSH.java:100) ~[crash.shell-1.3.2.jar:?]
at org.crsh.shell.impl.command.CRaSHSession.getCommand(CRaSHSession.java:96) ~[crash.shell-1.3.2.jar:?]
at org.crsh.lang.impl.script.PipeLineFactory.create(PipeLineFactory.java:89) ~[crash.shell-1.3.2.jar:?]
at org.crsh.lang.impl.script.ScriptRepl.eval(ScriptRepl.java:88) ~[crash.shell-1.3.2.jar:?]
at org.crsh.shell.impl.command.CRaSHSession.createProcess(CRaSHSession.java:163) ~[crash.shell-1.3.2.jar:?]
at org.crsh.shell.impl.async.AsyncProcess.execute(AsyncProcess.java:172) ~[crash.shell-1.3.2.jar:?]
at org.crsh.console.Console.iterate(Console.java:219) ~[crash.shell-1.3.2.jar:?]
at org.crsh.console.Console.on(Console.java:158) ~[crash.shell-1.3.2.jar:?]
at org.crsh.console.Console.on(Console.java:135) ~[crash.shell-1.3.2.jar:?]
at org.crsh.console.jline.JLineProcessor.run(JLineProcessor.java:204) ~[crash.shell-1.3.2.jar:?]
at org.crsh.ssh.term.CRaSHCommand.run(CRaSHCommand.java:99) ~[crash.connectors.ssh-1.3.2.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Caused by: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
foo: 9: unable to resolve class abstractcmd
@ line 9, column 1.
@Usage("bar")
^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310) ~[groovy-2.4.7.jar:2.4.7]
at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:946) ~[groovy-2.4.7.jar:2.4.7]
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:593) ~[groovy-2.4.7.jar:2.4.7]
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:542) ~[groovy-2.4.7.jar:2.4.7]
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298) ~[groovy-2.4.7.jar:2.4.7]
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268) ~[groovy-2.4.7.jar:2.4.7]
at org.crsh.lang.impl.groovy.GroovyClassFactory.parse(GroovyClassFactory.java:59) ~[crash.shell-1.3.2.jar:?]
... 14 more
The only way that I am able to get a custom command to work properly is if I don't extend abstractcmd (thereby implementing the "util" methods in the command class itself) and remove the package line.
I am using Spring Boot v1.5.1.RELEASE.
Any ideas on what I'm doing wrong?
A:
Looks like CRaSH has issues with groovy classes extending other groovy classes, because if you don't extend one, it works just fine.
I'm sorry but I did not have time to fully investigate how and why (corrections/updates are more than welcome), and didn't find anything in the docs but the good news is that you can extend an abstract java class to achieve the same thing. I stumbled upon such a sample for the cron command that extends a GroovyCommand abstract class with the following javadoc:
/**
* The base command for Groovy class based commands.
*/
public abstract class GroovyCommand extends BaseCommand implements GroovyObject
So, to cut a long story short, you could use the following workaround:
Abstract SpringAwareCommand java class
package crash.commands;
import org.crsh.command.BaseCommand;
import org.springframework.beans.factory.BeanFactory;
public abstract class SpringAwareCommand extends BaseCommand {
private static final String SPRING_BEAN_FACTORY = "spring.beanfactory";
protected <T> T getBean(Class<T> beanClass) {
return ((BeanFactory) this.context.getAttributes().get(SPRING_BEAN_FACTORY)).getBean(beanClass);
}
}
Updated foo groovy class
package crash.commands
import com.xyz.MyService
import org.crsh.cli.Command
import org.crsh.cli.Usage
@Usage("do something commands")
class foo extends SpringAwareCommand {
@Command
@Usage("bar")
def String bar() {
try {
getBean(MyService.class).bar()
return "did bar"
} catch (Exception e) {
return String.format("could not do bar: %s", e.toString())
}
}
}
|
[
"serverfault",
"0000911802.txt"
] | Q:
migrate stopped mariadb server to new one
Hi I'm running a freenas system where I have two jails. One jail is broken in a way that I cannot start the mariadb server anymore due to userland and kernel version mismatch. So I created a new jail where I installed the mariadb server. Now I want to migrate the old mariadb instance with all the data and settings to the new one.
All instructions I can find refer to a running source instance where the data is dumped and then moved to the new instance.
How can I migrate the data with a stopped mariadb server?
I still have shell access to the jail that is the source server.
A:
OK, I copied everything from var/db/mysql on the old server to var/db/mysql on the new server and changed the rights as already mentioned.
Everything works fine again.
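For reference, the copy and permission fix boils down to something like the following (a sketch; the exact old-jail path, owner and rc script name may differ on your system):
# run in the new jail, with the database server stopped
cp -Rp /path/to/old-jail/var/db/mysql/ /var/db/mysql/
chown -R mysql:mysql /var/db/mysql
service mysql-server start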
|
[
"stackoverflow",
"0004003584.txt"
] | Q:
More elegant way to check for duplicates in C++ array?
I wrote this code in C++ as part of a uni task where I need to ensure that there are no duplicates within an array:
// Check for duplicate numbers in user inputted data
int i; // Need to declare i here so that it can be accessed by the 'inner' loop that starts on line 21
for(i = 0;i < 6; i++) { // Check each other number in the array
for(int j = i; j < 6; j++) { // Check the rest of the numbers
if(j != i) { // Makes sure don't check number against itself
if(userNumbers[i] == userNumbers[j]) {
b = true;
}
}
if(b == true) { // If there is a duplicate, change that particular number
cout << "Please re-enter number " << i + 1 << ". Duplicate numbers are not allowed:" << endl;
cin >> userNumbers[i];
}
} // Comparison loop
b = false; // Reset the boolean after each number entered has been checked
} // Main check loop
It works perfectly, but I'd like to know if there is a more elegant or efficient way to check.
A:
You could sort the array in O(n log n), then simply compare each element to the next one. That is substantially faster than your existing O(n^2) algorithm. The code is also a lot cleaner. Your code also doesn't ensure no duplicates were inserted when they were re-entered; you need to prevent duplicates from existing in the first place.
std::sort(userNumbers.begin(), userNumbers.end());
for(int i = 0; i < userNumbers.size() - 1; i++) {
if (userNumbers[i] == userNumbers[i + 1]) {
userNumbers.erase(userNumbers.begin() + i);
i--;
}
}
I also second the recommendation to use a std::set - no duplicates there.
A:
The following solution is based on sorting the numbers and then removing the duplicates:
#include <algorithm>
int main()
{
int userNumbers[6];
// ...
int* end = userNumbers + 6;
std::sort(userNumbers, end);
bool containsDuplicates = (std::unique(userNumbers, end) != end);
}
A:
Indeed, the fastest and, as far as I can see, most elegant method is as advised above:
std::vector<int> tUserNumbers;
// ...
std::set<int> tSet(tUserNumbers.begin(), tUserNumbers.end());
std::vector<int>(tSet.begin(), tSet.end()).swap(tUserNumbers);
It is O(n log n). This however does not make it, if the ordering of the numbers in the input array needs to be kept... In this case I did:
std::set<int> tTmp;
std::vector<int>::iterator tNewEnd =
std::remove_if(tUserNumbers.begin(), tUserNumbers.end(),
[&tTmp] (int pNumber) -> bool {
return (!tTmp.insert(pNumber).second);
});
tUserNumbers.erase(tNewEnd, tUserNumbers.end());
which is still O(n log n) and keeps the original ordering of elements in tUserNumbers.
Cheers,
Paul
|
[
"or.stackexchange",
"0000001197.txt"
] | Q:
How to get GAMS's solvers to work from Pyomo?
I want to run a model written in Pyomo language with CPLEX solver of GAMS.
However I get the following error:
"No 'gams' command found on system PATH - GAMS shell"
NameError: No 'gams' command found on system PATH - GAMS shell solver
functionality is not available. "
I have added the folder (C:\GAMS\win64\25.1) to the system's environment variables.
Pyomo version is 5.6.5. Python is 3.5.2. and GAMS is 25.1.2
I would be grateful if you could provide me with help.
solvername = 'gams'
opt = SolverFactory(solvername)
results = opt.solve(
instance, solver='cplex', keepfiles=True, tee=True)
A:
The syntax that you are using in your Pyomo code is correct and you should be able to access the GAMS solvers once Pyomo is able to find GAMS.
As the error mentions, the GAMS command is not found in the system PATH. I would double check if the GAMS path is correctly added to the system PATH. A way of doing so is opening a command prompt (since you are using Windows cmd.exe or the PowerShell) and writing gams. It should show your license details as follows
PS C:\Users\debernal> gams
--- Job ? Start 08/08
/19 03:32:16 25.1.0 r944b73f WEX-WEI x86 64bit/MS Windows
*** GAMS Base Module 27.2.0 r944b73f Released May 23, 2019 WEI x86 64bit/MS Window
***
*** GAMS Development Corporation
...
The Pyomo-GAMS interface is able to access both the GAMSShell solver (through the environment variable) and the GAMSDirect solver (which uses the GAMS Python API). To install the latter you need to run python setup.py install in the directory C:\GAMS\win64\25.1\apifiles\Python\api_36\ (adjust if you use different versions of Python/GAMS than the ones mentioned by the OP). More instructions can be found here.
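If the PATH edit still does not reach the Python process (for example because the shell or IDE was started before the variable was changed), a quick workaround is to extend the PATH inside the script before creating the solver; a sketch, assuming the install directory from the question (instance is the Pyomo model from the question):
import os
os.environ['PATH'] += os.pathsep + r'C:\GAMS\win64\25.1'

from pyomo.environ import SolverFactory

opt = SolverFactory('gams')
results = opt.solve(instance, solver='cplex', keepfiles=True, tee=True)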
|
[
"stackoverflow",
"0054477738.txt"
] | Q:
Get collection from settings.json
I create some config file in json format:
{
"SomeCollection" : [
{
"Val1" : "Some string",
"Val2" : "Some string2"
}
]
}
I create object with this config:
IConfiguration config = new ConfigurationBuilder()
.AddJsonFile("appsettings.json", true, true)
.Build();
But how can I get list of pairs Val1 and Val2?
A:
Ok folks. After a few experiments, I get a solution!
IConfiguration config = new ConfigurationBuilder()
.AddJsonFile("appsettings.json", true, true)
.Build();
foreach( var en in config.GetSection("SomeCollection").GetChildren())
{
string Val1= en["Val1"];
string Val2= en["Val2"];
}
Thank for trying help!
|
[
"ru.stackoverflow",
"0000713629.txt"
] | Q:
Как выйти из php.exe в консоли
Вопрос может показаться глупым, но я выполнил в командной строке команду
php
Теперь как мне выйти из него? Пробую вводить команды, не реагирует.
P.S возможности закрыть консоль нет
A:
используйте Ctrl+D для выхода из режима php
|
[
"stackoverflow",
"0059424179.txt"
] | Q:
How can I calculate avg of data from django form and store in variable for later use?
In the readerpage function, in my views.py, I am trying to calculate the avg of the two variables: readability_rating and actionability_rating, and store the result in avg_rating
def readerpage(request, content_id):
content = get_object_or_404(Content, pk=content_id)
form = ReviewForm(request.POST)
if form.is_valid():
review = form.save(commit=False)
review.content = content
readability_rating = form.cleaned_data['readability_rating']
readability = form.cleaned_data['readability']
actionability_rating = form.cleaned_data['actionability_rating']
actionability = form.cleaned_data['actionability']
general_comments = form.cleaned_data['general_comments']
review.avg_rating = (float(readability_rating) +
float(actionability_rating)) / 2
review.save()
return redirect('home')
args = {'content': content, 'form': form}
return render(request, 'content/readerpage.html', args)
The problem is that with this setup the two variables are still ChoiceFields - as such the above setup gives me the error:
float() argument must be a string or a number, not 'ChoiceField'
I’ve tried converting them to floats without any luck.
I also attempted using the TypedChoiceField with coerce=float, still with no luck
I’m not sure whether the best place to calculate this is in my function, my form, or my model?
models.py:
class Review(models.Model):
content = models.ForeignKey(Content, null=True, on_delete=models.CASCADE)
readability = models.CharField(null=True, max_length=500)
readability_rating = models.IntegerField(null=True)
actionability = models.CharField(null=True, max_length=500)
actionability_rating = models.IntegerField(null=True)
general_comments = models.CharField(null=True, max_length=500)
avg_rating = models.FloatField(null=True)
def _str_(self):
return self.title
forms.py:
class ReviewForm(forms.ModelForm):
readability = forms.CharField(widget=forms.Textarea)
readability_rating = forms.ChoiceField(
choices=[(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)])
actionability = forms.CharField(widget=forms.Textarea)
actionability_rating = forms.ChoiceField(
choices=[(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)])
general_comments = forms.CharField(widget=forms.Textarea)
class Meta:
model = Review
fields = ['readability', 'readability_rating',
'actionability', 'actionability_rating', 'general_comments']
Thanks for reading this.
A:
The variables are ChoiceFields because you are declaring them as ChoiceFields in your view function. Shouldn't you just fetch the values from your cleaned_data?
readability_rating = form.cleaned_data['readability_rating']
And to the second part of your question: Why not add it as a @property to your model?
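A sketch of that @property idea on the Review model from the question (avg_rating would then be computed on the fly instead of stored in the avg_rating FloatField):
from django.db import models

class Review(models.Model):
    # ... other fields as in the question ...
    readability_rating = models.IntegerField(null=True)
    actionability_rating = models.IntegerField(null=True)

    @property
    def avg_rating(self):
        # both ratings are stored as integers, so a plain average works
        return (self.readability_rating + self.actionability_rating) / 2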
|
[
"stackoverflow",
"0060233713.txt"
] | Q:
Azure API retrieving SAS policy, error InvalidHostName
I am trying to create an Event Hub in Azure using the REST API (with Postman), but I am getting an error in the process of generating the SAS token.
curl --location --request POST 'https://login.microsoftonline.com/0e3603bd-2f0b-43e2-b9b5-5d456791cf33/oauth2/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=client_credentials' \
--data-urlencode 'client_id=myclientid' \
--data-urlencode 'client_secret=myclientsecret' \
--data-urlencode 'resource=https://management.azure.com/'
I store the bearer token and use it to authenticate the following requests:
First, create the Event Hub namespace:
'''
curl --location --request PUT 'https://management.azure.com/subscriptions/6fa11037-363b-4ff4-a5a2-f4e93efa527c/resourceGroups/easypeasybi/providers/Microsoft.EventHub/namespaces/easypeasybi?api-version=2017-04-01' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer mybearertoken' \
--data-raw '{"location":"francecentral"}'
'''
Now, in order to create the Event Hub, I need to build a SAS token, therefore I need to retrieve the SAS policy first:
The creation of the namespace generates a SAS policy named RootManageSharedAccessKey; I can list all the policies using this call:
curl --location --request GET 'https://management.azure.com/subscriptions/6fa11037-363b-4ff4-a5a2-f4e93efa527c/resourceGroups/easypeasybi/providers/Microsoft.EventHub/namespaces/easypeasybi/AuthorizationRules?api-version=2017-04-01' \
--header 'Authorization: Bearer mybearertoken'
Lastly, I am trying to retrieve the RootManageSharedAccessKey policy but I am getting an error
{
"error": {
"code": "InvalidHostName",
"message": "The provided host name 'easypeasybi.servicebus.windows.net' is not whitelisted. "
}
}
The code I am using is the following
curl --location --request POST 'https://management.azure.com/subscriptions/6fa11037-363b-4ff4-a5a2-f4e93efa527c/resourceGroups/easypeasybi/providers/Microsoft.EventHub/namespaces/easypeasybi/AuthorizationRules/RootManageSharedAccessKey/listKeys?api-version=2017-04-01' \
--header 'Content-Type: application/atom+xml;type=entry;charset=utf-8' \
--header 'Host: easypeasybi.servicebus.windows.net' \
--header 'Authorization: Bearer mybearertoken' \
--data-raw '<entry xmlns='\''http://www.w3.org/2005/Atom'\''>
<content type='\''application/xml'\''>
<EventHubDescription xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect">
</EventHubDescription>
</content>
</entry> '
A:
In the header, the Host should be management.azure.com.
So in your command, change this line of code
--header 'Host: easypeasybi.servicebus.windows.net' \
to
--header 'Host: management.azure.com' \
|
[
"stackoverflow",
"0010720891.txt"
] | Q:
is this synchronization object implementation thread safe?
I am reviewing a colleague's Visual Studio 2008 C++03 application and I've come across an implementation of a thread synchronization primitive (below).
Assuming SyncObject is implemented correctly, is the use of a boolean in the below code to know if the resource is locked or unlocked thread-safe? If no, can you walk through a "ThreadA" does this and "ThreadB" does that situation so I understand your logic?
class CMyLock
{
public:
CMyLock(SyncObject* object)
: object_(object), acquired_(false)
{
};
// return true if the resource is locked within the given timeout period
bool Lock(DWORD dwTimeOut = INFINITE)
{
acquired_ = object_->Lock(dwTimeOut);
return acquired_;
};
// return true if the resource is unlocked
bool Unlock()
{
if (acquired_)
acquired_ = !object_->Unlock();
return !acquired_;
};
// return true if the resource is locked
bool IsLocked() { return acquired_; };
private:
bool acquired_;
// some thread synchronization primitive
SyncObject* object_;
};
A:
It is not thread-safe.
Directly after object_->Unlock() returns, another thread waiting in object_->Lock(dwTimeOut) can return and set acquired_ to true; then the unlocking thread sets acquired_ to false and overwrites the locked state incorrectly (IsLocked will return false while the object is locked).
|
[
"stackoverflow",
"0022286604.txt"
] | Q:
Find digital roots of a number
This is an ACM problem about finding the digital root of an integer.
Here is the problem text: http://acm.zju.edu.cn/onlinejudge/showProblem.do?problemId=115
This is my code, but when I submit it, I get Wrong Answer. On the other hand, I've checked this code with a number of integers and I get the correct answer.
#include <iostream>
using namespace std;
int main() {
unsigned long long cc = 0;
cin >> cc;
while (cc != 0) {
unsigned long long sum = 0;
while (cc > 0) {
sum += cc % 10;
cc = cc / 10;
if (cc == 0 && sum > 9) { cc = sum; sum = 0; }
}
cout << sum;
cin >> cc;
cout << endl;
}
}
Can you please help me?!
Thank you.
A:
The problem is that the input integer is larger than what would fit in an unsigned long long.
Therefore, you need to read the number as a string, and then calculate the digit sum from the string.
The following code will work:
#include <iostream>
#include <string>
using namespace std;
int main()
{
string inStr;
while(cin >> inStr && inStr != "0")
{
unsigned long long cc = 0;
for(string::const_iterator it = inStr.begin(); it!=inStr.end(); ++it)
{
cc += *it - '0';
}
unsigned long long sum = 0;
do
{
while (cc)
{
sum += cc % 10;
cc = cc / 10;
}
cc = sum;
sum = 0;
}while(cc > 9);
cout << cc << endl;
}
return 0;
}
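As a side note (not part of the original answer), once the digit sum of the input string is known, the digital root has a closed form — 1 + (n - 1) % 9 for n > 0 — so the inner reduction loop can be replaced. A sketch:
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string inStr;
    while (cin >> inStr && inStr != "0")
    {
        unsigned long long sum = 0;
        for (string::const_iterator it = inStr.begin(); it != inStr.end(); ++it)
        {
            sum += *it - '0'; // digit sum of the (possibly huge) input
        }
        cout << (sum == 0 ? 0 : 1 + (sum - 1) % 9) << endl; // closed-form digital root
    }
    return 0;
}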
|
[
"es.stackoverflow",
"0000315300.txt"
] | Q:
How to return/render several dictionaries in Django/Python?
I have these two (or more) dictionaries:
def Index(request):
nav={
"nav":{
"Servicio": "servicio",
"Detalles": "servicio",
"Precios": "productos",
"Contacto": "asesoramiento"
}
}
content={
"content":{
"contenido 1": "descripción 1",
"contenido 2": "descripción 2",
"contenido 3": "descripción 3",
"contenido 4": "descripción 4"
}
}
content.update(nav)
return render(request, "index.html", nav)
What I want to know is how to render this in Django/Python, since to render a single dictionary I was using:
return render(request, "index.html", nav)
But with two dictionaries I have no idea how; I tried several ways but without success.
As you can see, it must be a very simple problem, but I'm just learning Django/Python and some things get complicated for me. Thanks in advance!
Update:
Using content.update(nav) or nav.update(content), the code runs correctly and I can do a for over nav; however, I don't get any data from content. My HTML code looks like this:
{% for title, desc in content.items %}
<div class="col2">
<div>
<h2>{{title}}</h2>
</div>
<p>{{desc}}</p>
</div>
{% endfor %}
I have exactly the same data-access structure for nav and it works correctly.
A:
You can do an update on either of the two dictionaries, like this:
content.update(nav)
This will merge the two dictionaries and you will be able to render it without problems; its structure will look something like this:
{
'content': {
'contenido 1': 'descripción 1',
'contenido 2': 'descripción 2',
'contenido 3': 'descripción 3',
'contenido 4': 'descripción 4'
},
'nav': {
'Servicio': 'servicio',
'Detalles': 'servicio',
'Precios': 'productos',
'Contacto': 'asesoramiento'
}
}
You can also create a "parent" dictionary, so to speak, and update that dictionary with the two dictionaries.
If you want to do it separately, I'm afraid it's not possible; there may well be some way to do it that I'm completely unaware of, but I doubt it.
Update:
Regarding the problem you describe, it should work; it should iterate without any problem. In fact, I'm testing it right now and it works.
If you want to iterate over nav or content, remember that it must be done separately.
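A sketch of the "parent dictionary" idea applied to the view from the question (nav and content are assumed to be defined as above):
from django.shortcuts import render

def Index(request):
    # ... nav and content defined as in the question ...
    context = {}
    context.update(nav)
    context.update(content)
    # pass the merged dict, so both nav.items and content.items work in the template
    return render(request, "index.html", context)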
|
[
"unix.stackexchange",
"0000229234.txt"
] | Q:
Join multiple sed commands in one script for processing CSV file
Having a CSV file like this:
HEADER
"first, column"|"second "some random quotes" column"|"third ol' column"
FOOTER
and looking for result like:
HEADER
first, column|second "some random quotes" column|third ol' column
in other words, removing "FOOTER" and the quotes at the beginning, at the end, and around |.
So far this code works:
sed '/FOOTER/d' csv > csv1 | #remove FOOTER
sed 's/^\"//' csv1 > csv2 | #remove quote at the beginning
sed 's/\"$//' csv2 > csv3 | #remove quote at the end
sed 's/\"|\"/|/g' csv3 > csv4 #remove quotes around pipe
As you see the problem is it creates 4 extra files.
Here is another attempt, whose goal is to avoid creating extra files and to do the same thing in a single script. It doesn't work very well.
#!/bin/ksh
sed '/begin/, /end/ {
/FOOTER/d
s/^\"//
s/\"$//
s/\"|\"/|/g
}' csv > csv4
A:
First of all, as Michael showed, you can just combine all of these into a single command:
sed '/^FOOTER/d; s/^\"//; s/\"$//; s/\"|\"/|/g' csv > csv1
I think some sed implementations can't cope with that and might need:
sed -e '/^FOOTER/d' -e 's/^\"//' -e 's/\"$//' -e 's/\"|\"/|/g' csv > csv1
That said, it looks like your fields are defined by | and you just want to remove " around the entire field, leaving those that are within the field. In that case, you could do:
$ sed '/FOOTER/d; s/\(^\||\)"/\1/g; s/"\($\||\)/\1/g' csv
HEADER
first, column|second "some random quotes" column|third ol' column
Or, with GNU sed:
sed -r '/FOOTER/d; s/(^|\|)"/\1/g; s/"($|\|)/\1/g' csv
You could also use Perl:
$ perl -F"|" -lane 'next if /FOOTER/; s/^"|"$// for @F; print @F' csv
HEADER
first, column|second some random quotes column|third ol' column
A:
This would also work:
sed 's/^"//; s/"|"/|/g; s/""$/"/'
Example:
$ echo '"this"|" and "ths""|" and "|" this 2"|" also "this", "thi", "and th""' |
sed 's/^"//; s/"|"/|/g; s/""$/"/'
this| and "ths"| and | this 2| also "this", "thi", "and th"
pretty version
sed '
s/^"//
s/"|"/|/g
s/""$/"/
$d
'
|
[
"stackoverflow",
"0057894053.txt"
] | Q:
Can't trigger Firebase pub/sub function from HTTPS function
I've created two Firebase functions - one is an HTTPS function where I am publishing a message to a topic, and the other is a pub/sub function where I am responding to messages published to that topic.
testPubSub.ts
import { pubsub } from "firebase-functions";
export const testPubSub = pubsub
.topic("high-scores")
.onPublish(async (message, context) => {
console.log("hit test pubsub");
return null;
});
testHttps.ts
import { https } from "firebase-functions";
import { messaging } from "firebase-admin";
export const testHookEndpoint = https.onRequest(async (request, response) => {
const payload = {
notification: {
title: "Test title",
body: "test body"
}
};
const pubsubResponse = await messaging().sendToTopic("high-scores", payload);
console.log("response from pubsub", pubsubResponse);
response.send("success");
});
The HTTPS function appears to be running fine (200 response) and messaging is returning a message ID in the response, however I am not seeing the pub/sub function run in the Firebase Console.
When I look at GCP Console I see that "high-scores" has registered as a topic in the Pub/Sub tab, and I'm able to trigger other pub/sub functions in the project through Google Cloud Scheduler.
I'm not sure what step I'm missing for this.
A:
messaging().sendToTopic("high-scores", payload) is using Firebase Cloud Messaging to send a message to mobile applications subscribed to the given topic. This is completely different than Cloud Pubsub messaging. These two products don't actually have anything in common - FCM is for mobile apps and pubsub is for servers.
What you'll need to do instead is use the node pubsub SDK to send the message to your pubsub topic.
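A sketch of how testHttps.ts could publish to the topic instead — this assumes the @google-cloud/pubsub package is installed, and the publish call shown is the standard Node Pub/Sub API, so double-check it against the version you use:
import { https } from "firebase-functions";
import { PubSub } from "@google-cloud/pubsub";

const pubSubClient = new PubSub();

export const testHookEndpoint = https.onRequest(async (request, response) => {
  const payload = { title: "Test title", body: "test body" };
  // publish to the Cloud Pub/Sub topic that the onPublish trigger listens to
  await pubSubClient
    .topic("high-scores")
    .publish(Buffer.from(JSON.stringify(payload)));
  response.send("success");
});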
|
[
"math.stackexchange",
"0002972491.txt"
] | Q:
Is $\int_{\sin x}^{\cos x}x\, dx$ not a well-defined integral?
Consider the integral
$$\int_a^bx\, dx$$
where $a=\sin x$, and $b=\cos x$.
How can we evaluate this particular integral, if $a$ and $b$ are both functions of $x$, which is the variable with respect to which we are integrating?
A:
The variable inside the integral is a "dummy" in that it could be replaced by any other symbol. I think you could interpret $$\int_{\sin x}^{\cos x} x \, dx$$ as a sloppy way of writing, but having the same meaning as, $$\int_{\sin x}^{\cos x} y \, dy.$$
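For what it's worth, under that reading the integral can be evaluated directly:
$$\int_{\sin x}^{\cos x} y \, dy = \frac{1}{2}\left(\cos^2 x - \sin^2 x\right) = \frac{1}{2}\cos 2x.$$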
A:
Here is another interpretation.
Formally if $b\geq a$ then $\int_a^bxdx$ is a notation for $\int_{-\infty}^{\infty}\mathbf1_{(a,b]}(x)xdx$.
Applying that here leads to: $$\int_{\sin x}^{\cos x}xdx=\int_{-\infty}^{\infty}\mathbf1_{(\sin x,\cos x]}(x)xdx$$
I do not dare to say that this is the correct way of interpreting, but it illustrates at least that your question is a good question.
Personally I go for the interpretation of Umberto.
|
[
"serverfault",
"0000579906.txt"
] | Q:
Choosing Ubuntu distro for Apache server
I'm going to host an Apache server on an Azure VM. For that I have the choice of one of the following Ubuntu distros. Which should I choose, and which one is more stable?
Ubuntu Server 14.04 LTS
Ubuntu Server 13.10
Ubuntu Server 12.10
Ubuntu Server 12.04 LTS
A:
First, it depends on the quality of your system resources, like CPU, RAM, etc.
In addition, if you have about 2 months of time, then based on the term LTS (Long Term Support) I suggest you wait for Ubuntu Server 14.04; otherwise, Ubuntu Server 12.04 can be the best choice for you.
|
[
"stackoverflow",
"0015933146.txt"
] | Q:
Is Java foreach loop an overkill for repeated execution
I agree the foreach loop reduces typing and is good for readability.
A little background: I work on low-latency application development and receive 1 million packets to process per second, iterating through those packets and sending the information across to the listeners. I was using a foreach loop to iterate through the set of listeners.
Doing profiling, I figured out that a lot of Iterator objects are created to execute the foreach loop. Converting the foreach loop to an index-based for loop, I observed a huge drop in the number of objects created, thereby reducing the number of GCs and increasing application throughput.
Edit: (Sorry for the confusion, making this question clearer)
For example, I have a list of listeners (fixed size) and I loop through this for loop a million times a second. Is foreach an overkill in Java?
Example:
for(String s:listOfListeners)
{
// logic
}
compared to
for (int i=0;i<listOfListeners.size();i++)
{
// logic
}
Profiled screenshot for the code
for (int cnt = 0; cnt < 1_000_000; cnt++)
{
for (String string : list_of_listeners)
{
//No Code here
}
}
A:
EDIT: Answering the vastly different question of:
For example, I have a list of listeners (fixed size) and I loop through this for loop a million times a second. Is foreach an overkill in Java?
That depends - does your profiling actually show that the extra allocations are significant? The Java allocator and garbage collector can do a lot of work per second.
To put it another way, your steps should be:
Set performance goals alongside your functional requirements
Write the simplest code you can to achieve your functional requirements
Measure whether that code meets the functional requirements
If it doesn't:
Profile to work out where to optimize
Make a change
Run the tests again to see whether they make a significant difference in your meaningful metrics (number of objects allocated probably isn't a meaningful metric; number of listeners you can handle probably is)
Go back to step 3.
Maybe in your case, the enhanced for loop is significant. I wouldn't assume that it is though - nor would I assume that the creation of a million objects per second is significant. I would measure the meaningful metrics before and after... and make sure you have concrete performance goals before you do anything else, as otherwise you won't know when to stop micro-optimizing.
Size of list is around a million objects streaming in.
So you're creating one iterator object, but you're executing your loop body a million times.
Doing profiling i figured there are a lot of Iterator objects created to execute foreach loop.
Nope? Only a single iterator object should be created. As per the JLS:
The enhanced for statement is equivalent to a basic for statement of the form:
for (I #i = Expression.iterator(); #i.hasNext(); ) {
VariableModifiersopt TargetType Identifier =
(TargetType) #i.next();
Statement
}
As you can see, that calls the iterator() method once, and then calls hasNext() and next() on it on each iteration.
Do you think that extra object allocation will actually hurt your performance significantly?
How much do you value readability over performance? I take the approach of using the enhanced for loop wherever it helps readability, until it proves to be a performance problem - and my personal experience is that it's never hurt performance significantly in anything I've written. That's not to say that would be true for all applications, but the default position should be to only use the less readable code after proving it will improve things significantly.
A:
The "foreach" loop creates just one Iterator object, while the second loop creates none. If you are executing many, many separate loops that execute just a few times each, then yes, "foreach" may be unnecessarily expensive. Otherwise, this is micro-optimizing.
|
[
"pt.stackoverflow",
"0000245700.txt"
] | Q:
Using LISP functional programming, answer the exercises?
Suppose the following have been defined:
(defun xxx (x) (+ 1 x)) (setf xxx 5)
What is the value of the following expressions?
(xxx 2)
(xxx (+ (xxx 5) 3))
(+ 4 xxx)
(xxx xxx)
A:
This defines xxx as a function that adds 1 to its argument:
(defun xxx (x) (+ 1 x))
This defines xxx as having the value 5:
(setf xxx 5)
LISP keeps values and functions separate. That is, you have a variable xxx with the value 5 and a function xxx that adds one.
When you do this:
(print (xxx 2))
You are calling the function xxx and passing it 2 as a parameter. The result is 3.
With this:
(print (xxx (+ (xxx 5) 3)))
You are calling the function xxx and passing it 5 as a parameter, resulting in 6. Then 3 is added, which gives 9. The function xxx is called again with 9, which gives 10.
Now in this one:
(print (+ 4 xxx))
xxx is the number 5. Adding 4 gives 9.
Finally, this:
(print (xxx xxx))
You call the function xxx with the value of xxx (which is 5). So this results in 6.
See it working here on rextester.
|
[
"stackoverflow",
"0032486978.txt"
] | Q:
Synchronous cross-domain ajax DELETE request on unload
I am working with cross-domain remote resources that require locking. CORs headers are set appropriately.
I am trying to solve the case where the resource is not released by the client (remains locked until the lock expires) when the browser window is closed.
I had hoped to send a synchronous DELETE request on window unload. I am using jquery (answer can be plain javascript if necessary... mentioning jquery for context) and noticed their docs say "Cross-domain requests ... do not support synchronous operation" and I became very sad.
Is it possible to make a synchronous cross-domain ajax request? Is the jquery limitation due to older browsers? Everything I've read indicates the unload event listener will not be around long enough for the ajax call to complete if it is async and suggests using a synchronous request for this type of cleanup. Unfortunately the call is cross-domain... what can I do?
EDIT
So I am curious if I am getting lucky during local development (i.e. client on 127.0.0.1:8080 and api on 127.0.0.1:8081) or the jquery docs are just misleading. Will the following end up causing me problems down the road?
This appears to be working in Chrome45:
var unload_event = "unload." + lock.id
function release_lock(sync) {
$.ajax({
method: "DELETE",
async: !sync,
data: lock,
url: lock.url,
error: function(){
console.log("failed to release lock " + JSON.stringify(lock));
},
success: function(){
console.log("lock " + lock.id + " released");
lock = null;
$(window).off(unload_event);
}
});
}
$(window).on(unload_event, function(){
release_lock(true);
});
It does generate the following warning in the console:
Synchronous XMLHttpRequest on the main thread is deprecated because of
its detrimental effects to the end user's experience.
For more help, check http://xhr.spec.whatwg.org/.
A:
I would avoid doing this in the unload event due to the fact that synchronous ajax is the only way that will work, and synchronous ajax requests are deprecated in some modern browsers.
Alternatives include:
keepalive requests
This would involve periodically sending a request to the server indicating that the user is still editing the resource (a small sketch of this appears after this list). The downside to this technique is that the resource will remain locked until the timeout happens, so if your keepalive is set to an interval of 1 minute with a 3 minute lock timeout, it will remain locked for up to 3 minutes after the user has left the page. Additionally, if the user loses network connection for 3 minutes or longer, it will also become unlocked.
websockets
This would create an open connection between the client and the server, and while this connection is open, you can keep the resource locked. As soon as the client disconnects, you can assume that the client has closed the page and unlock it. The downside here is if the client loses network connection, it will also become unlocked.
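A rough sketch of the keepalive idea, reusing the lock object from the question (the "/keepalive" endpoint and the 60-second interval are made-up placeholders):
var keepAliveTimer = setInterval(function () {
    $.ajax({
        method: "POST",
        url: lock.url + "/keepalive", // hypothetical endpoint that renews the lock
        data: lock
    });
}, 60 * 1000); // renew well inside the server-side lock timeout

// stop renewing once the lock has been released normally
function stopKeepAlive() {
    clearInterval(keepAliveTimer);
}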
|
[
"stackoverflow",
"0055172069.txt"
] | Q:
Django filter "less than" datetime not working correctly
I am trying to filter a Django queryset by timestamp (less than a certain value). However, the filter seems to be letting through records that are NOT less than the specified timestamp. Here is an example function:
def check_jobs_status_3():
current_time = datetime.utcnow()
time_threshold = current_time - timedelta(seconds=60)
print("$$$$$$$$$$$$ current_time = {}, timedelta = {}, time_threshold = {}".format(current_time,timedelta(seconds=60),time_threshold))
stuck_jobs_qs = Job.objects.filter(last_updated__lt=time_threshold)
for stuck_job in stuck_jobs_qs:
print("############################## Job #{} (last_updated = {}) no encoding status after {} seconds. Re-enqueueing.".format(stuck_job.id,stuck_job.last_updated,get_video_encoding_timeout_seconds()))
Here is the output:
$$$$$$$$$$$$ current_time = 2019-03-14 20:54:15.221554, timedelta = 0:01:00, time_threshold = 2019-03-14 20:53:15.221554
############################## Job #20 (last_updated = 2019-03-14 20:54:15.221264+00:00) no encoding status after 60 seconds. Re-enqueueing.
As you can see, I am receiving a record with last_updated set to 2019-03-14 20:54:15, which is NOT less than the filter value of 2019-03-14 20:53:15
Here is the definition of the last_updated field:
last_updated = models.DateTimeField(auto_now=True)
What could the problem be?
A:
Use Django's timezone-aware method django.utils.timezone.now.
from django.utils import timezone
# ...
def check_jobs_status_3():
current_time = timezone.now() # change this
time_threshold = current_time - timedelta(seconds=60)
print("$$$$$$$$$$$$ current_time = {}, timedelta = {}, time_threshold = {}".format(current_time,timedelta(seconds=60),time_threshold))
stuck_jobs_qs = Job.objects.filter(last_updated__lt=time_threshold)
for stuck_job in stuck_jobs_qs:
print("############################## Job #{} (last_updated = {}) no encoding status after {} seconds. Re-enqueueing.".format(stuck_job.id,stuck_job.last_updated,get_video_encoding_timeout_seconds()))
|
[
"stackoverflow",
"0059850098.txt"
] | Q:
How to get elements with max values from list of values in R
Hi, I have a list of values and I only need the elements with the highest rating, i.e. 5.
So for the list below I only need the movie names which have a rating of 5; I don't need the movies with ratings 2 and 4.
How do I do this in R?
$Movie1
[1] 5
$Movie2
[1] 5
$Movie3
[1] 2
$Movie4
[1] 5
$Movie5
[1] 5
$Movie6
[1] 4
I need result as
Movie1 Movie2 Movie4 Movie5
5 5 5 5
A:
The first step is going to be to unlist your list, then index the movie names for those that have the maximum ratings, then store them in a dataframe. Since you didn't really provide a use case, it is hard to know exactly why you want the format you do, but here is something to get you started, I hope.
my_list <- list(movie1 = 5, # a mock movie list
movie2 = 5,
movie3 = 2,
movie4 = 5,
movie5 = 4)
new <- do.call(rbind, my_list) # unlisting the elements into a df
t(
data.frame(
movie = row.names(new)[new == max(new)], # here we store row names of max rows
num = max(new)) # here we provide the max rating
) # finally, transpose `t()`
Hopefully this helps. Alternatively, you could do this to get the literal output you want from a similar list:
my_list <- list(movie1 = 5, # a mock movie list
movie2 = 5,
movie3 = 2,
movie4 = 5,
movie5 = 4)
new <- do.call(rbind, my_list) # unlisting the elements into a df
movie_names <- row.names(new)[new == max(new)] # store all the movie names that have max rating
rep_num <- max(new) # store max rating (to pass to rep and eventually table
len <- length(movie_names) # len to pass to seq_len
table(sapply(seq_len(len), function(i){rep(movie_names[i], rep_num)})) # then finally create a function to repeat the movie name for the max rating and create a table from it.
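A shorter sketch of the same idea (not from the original answer), using the my_list mock above:
ratings <- unlist(my_list)          # named numeric vector of ratings
ratings[ratings == max(ratings)]    # keeps only the top-rated movies, names included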
|
[
"stackoverflow",
"0046775654.txt"
] | Q:
How to Integration Test Spring Shell v2.x
The new Spring Shell docs don't seem to provide any examples of how to integration test CLI commands in a Spring Boot context. Any pointers or examples would be appreciated.
A:
The method Shell#evaluate() has been made public and has its very specific responsibility (evaluate just one command) for exactly that purpose. Please create an issue with the project if you feel like we should provide more (A documentation chapter about testing definitely needs to be written)
|
[
"stackoverflow",
"0054602566.txt"
] | Q:
Can I generate a .exe with a WPF application?
Is it possible to generate a .exe from another .exe using Visual Studio and .NET Framework? For example, the application grabs a .dll and converts it into a standalone .exe app.
A:
See: How to combine DLLs with .exe inside of a wpf / winforms application (with pictures)
It explains what you are looking for. Works for both wpf and winforms.
|
[
"stackoverflow",
"0011295530.txt"
] | Q:
How to Post an image To Facebook Wall using New Fbconnect Api?
I successfully posted an image along with text to a Facebook Wall using the old FBConnect API. But now the old FBConnect API doesn't work for me. Here is the code that I'm using: Download
I integrated the new FBConnect API from a different tutorial but I cannot get it to work. I have three problems with this sample code:
How to integrate the new FBConnect API.
How to post an image along with text on the user's Facebook Wall, as I am doing in this sample code.
How to post an image along with text on a specific friend's Facebook Wall.
I also ask anyone who wants to help to please use my code when making changes.
Any help will be appreciated. Thanks in advance.
A:
You can use the Graph API of FB to post images on the FB wall. This is the easiest way to do this. Use this link to implement it.
Happy Coding :)
|
[
"serverfault",
"0000269536.txt"
] | Q:
Building a SAN with a MD3000(i)
I've been by here as a lurker a few times in the past, and have found it nothing but helpful, now I have a question of my own.
I'm charged with creating a VM cluster solution and have been looking into the MD3000(i) series DAS/iSCSI storage. I currently have 2 PowerEdge 1950s that I can hook up to a MD3000 via PERC5 SAS HBAs. However, and this is the tricky part, I want to create a Clustered or High Availability Disk that is accessible over the network.
One way I can see doing this is to divide up the MD3000 into a few LUNs, use one to create a clustered VM and then connect another LUN as a pass-through disk to that VM, which can then "share" that disk via an iSCSI target. However, I do see a few pitfalls here: if the VM is Active/Passive I only get the benefit of using 1 HBA to handle IO. Additionally, I am wary of the performance overhead that may be introduced by using a VM to manage the SAN disk.
Are these concerns Justified? Can the VM even successfully fail over and still communicate with the Pass through disk?
Another option that seems far simpler is to just pick up a MD3000i instead and just set it up as an iSCSI target using my 1950s to manage it. The only reason I am think of alternatives is because I am concerned that the 1 Gigabit Ports on this unit will create a bottleneck.
I realize that if I'm looking for a super high performance SAN solution then probably the MD3000 series isn't the way to go but I am looking for an reasonably priced solution to cluster 5-6 Low/medium Utilization VMs (around 60iops each, ~90% writes).
I don't mind "out of box" thinking to come up with a solution but I do need to be able to support more original thinking with documentation.
Thanks in advance for any thoughts.
A:
I've reread your 3rd paragraph several times but I'm still confused by it so I won't comment on that part.
Dell used to sell a PowerEdge 1950/MD3000/optionalMD1000/optionalMD1000 as a NAS bundle with Microsoft Storage Server installed on the 1950. You could easily recreate that config with your existing 1950 and MD3000 by running the now freely available Microsoft iSCSI Target. Personally, I think the Microsoft iSCSI Target stuff is handy for labs but in a production environment relying on the stability of Windows to serve my storage makes me uneasy. I ran a couple of these systems and they were ok. Obviously you could use the same hardware and run any OS and your favorite iSCSI target or NFS gateway.
The MD3000i iSCSI option works too. I have a few of these. For the load you're talking about, they would be more than adequate. The MD3000i really couldn't be any easier to manage.
If you have some of this hardware already, it's certainly still very viable. If you don't, note that Dell itself isn't selling the MD3000i anymore - there's a new line that does similar stuff.
|
[
"physics.stackexchange",
"0000561903.txt"
] | Q:
Why is pressure over the wing lesser than the pressure on the bottom?
Why is the flow above the wing faster than the lower one?
Most people say it's because the pressure above is lower than the pressure below.
But for the pressure to be low, the velocity must be high. So it's like the chicken-and-egg question for me: I can't understand which one is the cause and which one is the effect.
A:
The answer to this is "it's complicated." Indeed NASA has a 4 page trail devoted to the many oversimplifications, such as the assumption that the air flowing over the top must move faster to "catch up" to the air moving across the lower side of the wing.
There are certainly two major aspects of this lift. The first is that air flowing laminarly over a curved surface will exert a lower pressure on the surface than it would if it were flat. This accounts for some of the lift. There is also what NASA calls the "skipping stone" argument, which notes that the air leaving an airfoil is directed downward.
In reality, neither model fully captures the effect of air flowing over a wing. To do that, we need to simply integrate the Euler equations (or Navier Stokes, in materials where we cannot ignore viscosity) to get the full description of lift. However, in a pinch, one can recognize that those two models both describe part of the story, like two blind men describing the leg and the trunk of the elephant in the famous story.
|
[
"stackoverflow",
"0050455054.txt"
] | Q:
MongoError: Unknown modifier $pushAll when I try to push an userId into an Array field in Node.js
I'm getting a MongoError: Unknown modifier $pushAll error when I try to push a userId to an Array of likes field:
A:
$pushAll has been deprecated since version 2.4:
Deprecated since version 2.4: Use the $push operator with $each instead.
You can/should upgrade to a more recent version of Mongoose that is compatible with the MongoDB version you are using or downgrade MongoDB to an older version that still supports $pushAll. I would advise the former.
Alternatively, you could use the usePushEach: true option when defining your schema, if you are using Mongoose version < 5:
new mongoose.Schema({ ... }, {
usePushEach: true,
});
You can read more on that issue on GitHub:
vkarpov15 commented on 26 Sep 2017
Well that's one problem, MongoDB 3.5 is an unstable dev release and should not be used. $pushAll has been deprecated for a long time so perhaps they got rid of it in 3.5. @mbroadst can you clarify?
We added a usePushEach option to work around this a while back: #4455, that should be a workaround for this issue:
new Schema({ arr: [String] }, { usePushEach: true });
Support for usePushEach was dropped on Mongoose version >= 5, so in that case, you could use Array.prototype.concat() instead, so your old code:
selectedHotel.likes.push(userId);
Would turn into:
selectedHotel.likes = selectedHotel.likes.concat([userId]);
Note that I'm assigning the value returned by concat back to selectedHotel.likes, as concat does not mutate the original Array.
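For reference, a sketch of the raw operator form that replaces $pushAll — Hotel is a hypothetical model name inferred from the selectedHotel variable in the question:
Hotel.updateOne(
  { _id: selectedHotel._id },
  { $push: { likes: { $each: [userId] } } }
).exec();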
|
[
"tex.stackexchange",
"0000113862.txt"
] | Q:
Problem with margins and tables
Here is a picture of my problem :
As you can see, the table isn't well centered at all. I'd like the distance from the left margin to be equal to the distance from the right margin.
Here is my latex code:
\begin{table*}[h!] \centering
\caption{Cette table indique pour chaque méthode si elles sont statistiquement meilleures qu'un classeur ne prédisant que la classe dominante pour la base de données "Nombre d'enfants".}
\begin{tabular}{|c|c|c|c|c|c|c|} % 7 colonnes
\hline
\textbf{Méthodes} & LapRLS & LapRKLS & LRreglog & Autolog & BagOfPath & RCTK % premiere colonne
\\ \hline
\textbf{>=0.5501} & 5 & 0 & 5 & 5 & 0 & 5
\\ \hline\hline
SVM & SVMmoran & SVMgeary & LogisticReg & Logmoran & Loggeary & MultiVarLog
\\ \hline
5 & 5 & 5 & 0 & 0 & 4 & 1
\\ \hline
\end{tabular}
\end{table*}
Does anyone know how I could solve this problem?
A:
It usually gives a more consistent appearance if you choose a defined document font size such as \footnotesize rather than scaling the table. I also used array package to give extra padding below the horizontal lines, and used table rather than table*. Please always give complete documents showing the class and all packages used. The font size here is suitable for article class A4, but may not be right for other page sizes, but your example does not give that information.
\documentclass[a4paper]{article}
\usepackage[T1]{fontenc}
\usepackage{array}
\begin{document}
\begin{table}
\centering
\caption{Cette table indique pour chaque méthode si elles sont statistiquement meilleures qu'un classeur ne prédisant que la classe dominante pour la base de données "Nombre d'enfants".}
\footnotesize
\setlength\tabcolsep{3pt}
\setlength\extrarowheight{3pt}
\smallskip
\begin{tabular}{|c|c|c|c|c|c|c|} % 7 colonnes
\hline
\textbf{Méthodes} & LapRLS & LapRKLS & LRreglog & Autolog & BagOfPath & RCTK % premiere colonne
\\ \hline
\textbf{>=0.5501} & 5 & 0 & 5 & 5 & 0 & 5
\\ \hline\hline
SVM & SVMmoran & SVMgeary & LogisticReg & Logmoran & Loggeary & MultiVarLog
\\ \hline
5 & 5 & 5 & 0 & 0 & 4 & 1
\\ \hline
\end{tabular}
\end{table}
\end{document}
|
[
"stackoverflow",
"0015102834.txt"
] | Q:
Web Config Transforms: Insert If Not Exists
I would like to apply a transformation if and only if a matched element does not exist in the target. Trying various xpath expressions using http://webconfigtransformationtester.apphb.com/ but no luck so far.
E.g. if the target web.config looks like this:
<configuration>
<system.web>
<compilation debug="true" />
</system.web>
</configuration>
then the output should look like this:
<configuration>
<connectionStrings>
<add name="MyCs" provider="System.Data.SqlClient" connectionString="" />
<add name="SomeOtherCs" provider="System.Data.SqlClient" connectionString="" />
</connectionStrings>
<system.web>
<compilation debug="true" />
</system.web>
</configuration>
But if the target looks like this:
<configuration>
<connectionStrings>
<add name="MyCs" provider="System.Data.IChangedIt" connectionString="my connection string here" />
</connectionStrings>
<system.web>
<compilation debug="true" />
</system.web>
</configuration>
then the result of the transformation should look like this:
<configuration>
<connectionStrings>
<add name="MyCs" provider="System.Data.IChangedIt" connectionString="my connection string here" />
<add name="SomeOtherCs" provider="System.Data.SqlClient" connectionString="" />
</connectionStrings>
<system.web>
<compilation debug="true" />
</system.web>
</configuration>
In other words, I just want to add the named connection string to the configuration but let the administrator fill it in with his own values. I thought it would be as simple as xdt:Transform="Insert" xdt:Locator="XPath(count(/configuration/connectionStrings)=0)" (to add a cs config section if none existed) but apparently not.
A:
Use xdt:Transform="InsertIfMissing" with the XmlTransform task in VS2012. It doesn't look like Microsoft has updated their documentation to reflect this yet.
A:
In my case xdt:Transform="InsertIfMissing" did not work without xdt:Locator="Match(name)"
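Putting the two notes above together, a sketch of the transform for the connection strings from the question might look like this (treat the exact combination of InsertIfMissing and Match(name) as an assumption to verify against your build tooling):
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings xdt:Transform="InsertIfMissing">
    <add name="MyCs" provider="System.Data.SqlClient" connectionString=""
         xdt:Transform="InsertIfMissing" xdt:Locator="Match(name)" />
    <add name="SomeOtherCs" provider="System.Data.SqlClient" connectionString=""
         xdt:Transform="InsertIfMissing" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>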
A:
Try this alternative transformation for xdt:Transform="InsertIfMissing" :
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
<nodeToInsertIfMissing xdt:Transform="Insert" />
<nodeToInsertIfMissing xdt:Transform="Remove" xdt:Locator="XPath(/configuration/nodeToInsertIfMissing[2])" />
</configuration>
It should works following MSDN documentation:
Insert - adds the element that is defined in the transform file as a sibling to the selected element or elements. The new element is added at the end of any collection.
So, if the node already exists, we add a second one and then remove that (2nd) node. Otherwise, we add the new, unique node but the remove operation will fail.
Note: It seems not working with NuGet *.(un)install.xdt transformation. InsertIfMissing too.
|
[
"stackoverflow",
"0057715668.txt"
] | Q:
Different JSON (de)serialization configs on different endpoints using Spring WebFlux
My micro service needs to communicate with 2 different services over HTTP. 1 has an API contract with snake_case JSON, while the other uses camelCase. How can I configure WebFlux to deserialize and serialize JSON with a certain Jackson ObjectMapper on a set of functional endpoints, while use another one on different endpoints?
The WebFlux documentation shows how to wire in another ObjectMapper, but this applies to all the endpoints of my API. So right now all my JSON is either in snake_case or in camelCase. Can't find any resource to solve this issue, but it must be doable, right?
Update: to make it clear I want to configure the web server which receives the requests from other services, not the webclient for sending http requests myself. I know how to do the latter.
A:
Okay, so this is not the cleaned-up solution (I will use this solution from our library), but the basic gist of my workaround looks like this:
@Controller
public class Handler {
private ObjectMapper mapper;
public Handler(@Qualifier("snakeCaseWrapper") ObjectMapper mapper) {
this.mapper = mapper;
}
Mono<ServerResponse> returnUser(final ServerRequest request) {
//REQUEST DESERIALIZATION
var messageReader = new DecoderHttpMessageReader<>(new Jackson2JsonDecoder(mapper));
var configuredRequest = ServerRequest.create(request.exchange(), List.of(messageReader));
//RESPONSE SERIALIZATION
return configuredRequest.bodyToMono(UserDto.class)
.map(userDto -> {
try {
return mapper.writeValueAsString(userDto);
} catch (JsonProcessingException e) {
e.printStackTrace();
//properly handle the error here
return "";
}
})
.flatMap(json -> ServerResponse.ok()
.contentType(MediaType.APPLICATION_JSON)
.body(BodyInserters.fromObject(json))
);
}
}
This is the only way I could find to programatically choose which kind of ObjectMapper I want to use for a specific endpoint/handler method for request deserialization. For response serialization, the trick was to first use the ObjectMapper to serialize the response body to a String, and put that String into the response with BodyInserters.fromObject(json) .
It works, so I'm happy with it.
|