Q:
Two Aggregate Totals in One Group
I wrote a query in MongoDB as follows:
db.getCollection('student').aggregate(
[
{
$match: { "student_age" : { "$ne" : 15 } }
},
{
$group:
{
_id: "$student_name",
count: {$sum: 1},
sum1: {$sum: "$student_age"}
}
}
])
In other words, I want to fetch the count of students who aren't 15 years old, together with the sum of their ages. The query works fine and I get both values.
In my application, I want to do the query by Spring Data.
I wrote the following code:
Criteria where = Criteria.where("AGE").ne(15);
Aggregation aggregation = Aggregation.newAggregation(
Aggregation.match(where),
Aggregation.group().sum("student_age").as("totalAge"),
count().as("countOfStudentNot15YearsOld"));
When this code is run, the output query will be:
"aggregate" : "MyDocument", "pipeline" :
[ { "$match" : { "AGE" : { "$ne" : 15 } } },
{ "$group" : { "_id" : null, "totalAge" : { "$sum" : "$student_age" } } },
{ "$count" : "countOfStudentNot15YearsOld" }],
"cursor" : { "batchSize" : 2147483647 }
Unfortunately, the result contains only the countOfStudentNot15YearsOld item.
I want to fetch the result like my native query.
A:
If you're asking to return the grouping for both "15" and "not 15" in one result, then you're looking for the $cond operator, which allows "branching" based on a conditional evaluation.
In the shell you would use it like this:
db.getCollection('student').aggregate([
{ "$group": {
"_id": null,
"countFifteen": {
"$sum": {
"$cond": [{ "$eq": [ "$student_age", 15 ] }, 1, 0 ]
}
},
"countNotFifteen": {
"$sum": {
"$cond": [{ "$ne": [ "$student_age", 15 ] }, 1, 0 ]
}
},
"sumNotFifteen": {
"$sum": {
"$cond": [{ "$ne": [ "$student_age", 15 ] }, "$student_age", 0 ]
}
}
}}
])
So you use $cond to perform a logical test, in this case whether the "student_age" in the current document is 15. Depending on the outcome you return a value to the accumulator: a numeric constant such as 1 for "counting", or the actual field value when that is what you want summed. In short, $cond is a "ternary" if/then/else operator (it can also be written in a more expressive form with named keys) that lets you test a condition and decide what to return.
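To make the accumulator behavior concrete, here is a plain-JavaScript sketch (not MongoDB itself) of what this $group stage computes; the sample documents are hypothetical:

```javascript
// Plain-JS model of the $group/$cond stage above: each ternary mirrors one
// $cond expression, and reduce() plays the role of the $sum accumulator.
const students = [
  { student_age: 15 },
  { student_age: 14 },
  { student_age: 16 },
];

const result = students.reduce(
  (acc, doc) => ({
    countFifteen:    acc.countFifteen    + (doc.student_age === 15 ? 1 : 0),
    countNotFifteen: acc.countNotFifteen + (doc.student_age !== 15 ? 1 : 0),
    sumNotFifteen:   acc.sumNotFifteen   + (doc.student_age !== 15 ? doc.student_age : 0),
  }),
  { countFifteen: 0, countNotFifteen: 0, sumNotFifteen: 0 }
);

console.log(result); // { countFifteen: 1, countNotFifteen: 2, sumNotFifteen: 30 }
```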
For the spring mongodb implementation you use ConditionalOperators.Cond to construct the same BSON expressions:
import org.springframework.data.mongodb.core.aggregation.*;
ConditionalOperators.Cond isFifteen = ConditionalOperators.when(new Criteria("student_age").is(15))
.then(1).otherwise(0);
ConditionalOperators.Cond notFifteen = ConditionalOperators.when(new Criteria("student_age").ne(15))
.then(1).otherwise(0);
ConditionalOperators.Cond sumNotFifteen = ConditionalOperators.when(new Criteria("student_age").ne(15))
.thenValueOf("student_age").otherwise(0);
GroupOperation groupStage = Aggregation.group()
.sum(isFifteen).as("countFifteen")
.sum(notFifteen).as("countNotFifteen")
.sum(sumNotFifteen).as("sumNotFifteen");
Aggregation aggregation = Aggregation.newAggregation(groupStage);
So basically you just extend that logic, using .then() for a "constant" value such as 1 for the "counts", and .thenValueOf() where you actually need the "value" of a field from the document, equivalent to "$student_age" in the shell notation.
Since ConditionalOperators.Cond shares the AggregationExpression interface, this can be used with .sum() in the form that accepts an AggregationExpression as opposed to a string. This is an improvement on past releases of spring mongo which would require you to perform a $project stage so there were actual document properties for the evaluated expression prior to performing a $group.
If all you want is to replicate the original query in Spring Data MongoDB, then your mistake was using the separate $count aggregation stage rather than appending count() to the group():
Criteria where = Criteria.where("AGE").ne(15);
Aggregation aggregation = Aggregation.newAggregation(
Aggregation.match(where),
Aggregation.group()
.sum("student_age").as("totalAge")
.count().as("countOfStudentNot15YearsOld")
);
| {
"pile_set_name": "StackExchange"
} |
Q:
Product Count & Product Image
I am installing a Marketplace extension on my website. On the product details page I need to display the seller's product count (how many products they have) and three images from the current seller.
This is my .phtml:
$active = Mage::getStoreConfig('marketplace/admin_approval_seller_registration/displayproductpage');
if ($active == 1) {
$productId = Mage::registry('current_product')->getEntityId();
$sellerId = Mage::registry('current_product')->getSellerId();
$sellerData = $this->sellerdisplay($sellerId);
$showProfile = $this->sellerprofiledisplay($sellerId);
if ($showProfile == 1) {
$targetPath = 'marketplace/seller/displayseller/id/' . $sellerId;
$mainUrlRewrite = Mage::getModel('core/url_rewrite')->load($targetPath, 'target_path');
$getRequestPath = $mainUrlRewrite->getRequestPath();
$getRequestPath = Mage::getUrl($getRequestPath);
?>
<div class="linker_seller">
<strong><?php echo $this->__('Seller'); ?></strong>:
<a href='<?php echo $getRequestPath; ?>' class="link_seller"><?php echo $sellerData['store_title']; ?></a>
<address><?php $_countries = Mage::getResourceModel('directory/country_collection')->loadData()->toOptionArray(false); ?>
<?php foreach ($_countries as $_country) {?>
<?php if($sellerData['country'] == $_country['value']){?>
<?php $sellerCountry = $_country['label'];?>
<?php } ?>
<?php } ?>
<?php echo $sellerData['state'].','.$sellerCountry;?></address>
<p><?php $description = strip_tags($sellerData['description']);
$newLengthDescription = strlen( $description);
$newSubDescription = substr($description, 0, 160);
if ($newLengthDescription >= 160) {
$newDescriptionFix = $newSubDescription . "...";
} else {
$newDescriptionFix = $description;
}
echo $newDescriptionFix; ?>
<!-- Test -->
<?php
//Get seller data collection
/*$seller_collection = $this->getCollection();
$seller_count = count($seller_collection);*/
$collection = Mage::getResourceModel('catalog/product_collection');
$collection->addAttributeToFilter('status',1); //only enabled product
$collection->addAttributeToSelect('*'); //add product attribute to be fetched
$collection->addAttributeToFilter('seller_id',$sellerId);
$collection->addStoreFilter();?>
<div class="my-account-wrapper"><ul class="mp_all_sellers_container f-left" border="0" cellspacing="0" cellpadding="0">
<?php
foreach ($collection as $_seller_collection) {
$sellerId = $_seller_collection['entity_id'];
$get_requestPath = Mage::helper('marketplace/marketplace')->getSellerRewriteUrl($sellerId);?>
<li class="f-left">
<a class="mp_all_sellers_view" href="<?php echo $get_requestPath; ?>" title="<?php echo $_seller_collection['store_title']; ?>">
<?php if (strpos($_seller_collection['store_logo'], '.') && $_seller_collection['store_title'] != '') {?>
<img src="<?php echo Mage::getBaseUrl('media') . "marketplace/resized/" . $_seller_collection['store_logo']; ?>" style="vertical-align: middle;" />
<?php } elseif (!strpos($_seller_collection['store_logo'], '.') && $_seller_collection['store_title'] != '') { ?>
<img src="<?php echo $this->getSkinUrl('images/no-image-thumbnail.png'); ?>" style="vertical-align: middle;" />
<?php } ?>
</a>
<div class="store-title">
<a href="<?php echo $get_requestPath; ?>" title="<?php echo $_seller_collection['store_title']; ?>"><?php echo $_seller_collection['store_title']; ?></a>
<address><?php $country = Mage::getModel('directory/country')->loadByCode($_seller_collection['country']);
echo $_seller_collection['state'].','."<br/>".$country->getName();?>
</address>
<?php //echo $seller_id;?>
</div>
<!-- Products for the Sellers -->
<div class="seller-products">
<ul id = "seller-product-list" class="products-grid product_snipt f-left">
<?php
$sellerProductsCollection = $this->getSellerProducts($sellerId);
//echo count($sellerProductsCollection);
if (count($sellerProductsCollection) > 0) {
$a=1;
foreach($sellerProductsCollection as $sellerProduct){
if($a%3==0){
break;
}
?>
<li class="item <?php if($limit==0):?> bigimage<?php endif;?>">
<a href="<?php echo $sellerProduct->getProductUrl(); ?>">
<img class="product-image"
<?php if($limit==0):?>
src="<?php echo $this->helper('catalog/image')->init($sellerProduct, 'thumbnail')->resize(230); ?>"
<?php else:?>
src="<?php echo $this->helper('catalog/image')->init($sellerProduct, 'thumbnail')->resize(125); ?>"
<?php endif;?>
alt="<?php echo $this->stripTags($this->getImageLabel($sellerProduct, 'small_image'), null, true) ?>" />
</a>
</li>
<?php
$a++;}
}
?>
</ul>
<div class="productslist_bottom">
<p><span class="totalproducts"><?php echo count($sellerProductsCollection)?></span>
<span style="font-size:12px">product(s)</span></p>
<a href="<?php echo $get_requestPath; ?>" title="<?php echo $_seller_collection['store_title']; ?>"><?php echo $this->__("View More")?></a>
</div>
</div>
<!-- Products End here -->
</li>
<?php } ?>
</ul>
</div>
<!-- Test -->
<a href="<?php echo $getRequestPath; ?>"><?php echo $this->__('Read More'); ?></a></p>
<?php
$country = str_replace(" ", "+",$sellerData->getCountry());
$state = str_replace(" ", "+",$sellerData->getState());
$url = 'http://maps.google.com/maps/api/geocode/json?address="'.$state.'"&sensor=false&region="'.$country.'"';
$response = file_get_contents($url);
$response = json_decode($response, true);
$lat = $long = '';
if(isset($response['results'][0]['geometry']['location']['lat'])){
$lat = $response['results'][0]['geometry']['location']['lat'];
}
if(isset($response['results'][0]['geometry']['location']['lng'])){
$long = $response['results'][0]['geometry']['location']['lng'];
}
if(!empty($lat) && !empty($long)){
?>
<script
src="http://maps.googleapis.com/maps/api/js?key=AIzaSyDY0kkJiTPVd2U7aTOAwhc9ySH6oHxOIYM&sensor=false">
</script>
<script>
var myCenter=new google.maps.LatLng(<?php echo $lat; ?>,<?php echo $long;?>);
function initialize()
{
var mapProp = {
center:myCenter,
zoom:5,
mapTypeId:google.maps.MapTypeId.ROADMAP
};
var map=new google.maps.Map(document.getElementById("googleMap"),mapProp);
var marker=new google.maps.Marker({
position:myCenter,
});
marker.setMap(map);
}
google.maps.event.addDomListener(window, 'load', initialize);
</script>
<div id="googleMap" style="height:150px"></div>
<?php }
/**
* Ends map functionality
*/
?>
<?php
$displaySeller = Mage::getModel('marketplace/sellerreview')->displayReview($sellerId);
$firstStar = $secondStar = $thirdStar = $fourthStar = $fifthStar = $advancedTotal = $ratingbar_color = 0;
/**
* ITERATING ALL RATINGS
*/
$advancedTotal=0;
foreach ($displaySeller as $individualStar) {
$advancedTotal = $advancedTotal + 1;
if ($individualStar['rating'] == 1) {
$firstStar = $firstStar + 1;
} elseif ($individualStar['rating'] == 2) {
$secondStar = $secondStar + 1;
} elseif ($individualStar['rating'] == 3) {
$thirdStar = $thirdStar + 1;
} elseif ($individualStar['rating'] == 4) {
$fourthStar = $fourthStar + 1;
} elseif($individualStar['rating'] == 5) {
$fifthStar = $fifthStar + 1;
}
}
/**
* CALCULATING INDIVIDUAL RATINGS
*/
$advancedOne = $advancedTwo = $advancedThree = $advancedFour = $advancedFive = 0;
if($advancedTotal >= 1){
$advancedOne = ($firstStar / $advancedTotal) * 100;
$advancedTwo = ($secondStar / $advancedTotal) * 100;
$advancedThree = ($thirdStar / $advancedTotal) * 100;
$advancedFour = ($fourthStar / $advancedTotal) * 100;
$advancedFive = ($fifthStar / $advancedTotal) * 100;
}
$positiveFeedBack = ($advancedFour + $advancedFive)/2;
$totalRatings = $firstStar + $secondStar + $thirdStar + $fourthStar + $fifthStar;
if($totalRatings != ''){
echo round($positiveFeedBack,1);
echo $this->__('% positive feedback. (');
echo number_format($totalRatings).' ';
echo $this->__('ratings )');
}
?>
<span class="title-sp"><?php $sellerProducts = $this->sellerproduct($sellerId);
$sellerProducts->addFieldToFilter('entity_id',array('neq' => $productId));
$sellerProducts->getSelect()->limit(4);
?>
<?php if(count($sellerProducts) >= 1){ ?>
<?php echo $this->__('Other products from this seller');?>
<?php foreach($sellerProducts as $_sellerProducts){ ?>
<?php if($productId != $_sellerProducts['entity_id']){
$productInfo = Mage::helper('marketplace/marketplace')->getProductInfo($_sellerProducts['entity_id']);
?></span>
<img src="<?php echo Mage::helper('catalog/image')->init($productInfo, 'small_image')->resize(50); ?>" width="50" height="50" />
<?php } } ?>
<a href='<?php echo $getRequestPath; ?>' class="btn_more"><?php echo $this->__('More')?></a>
<?php } ?>
</div>
<?php } ?>
<?php } ?>
A:
$collection = Mage::getResourceModel('catalog/product_collection');
$collection->addAttributeToFilter('status',1); //only enabled product
$collection->addAttributeToSelect('*'); //add product attribute to be fetched
$collection->addAttributeToFilter('seller_id',$sellerId);
$collection->addStoreFilter();
This is your collection. You can get the product count with count($collection), or more efficiently with $collection->getSize(), which issues a COUNT query instead of loading every row. To show only the first three products:
$a = 1;
foreach ($collection as $product) {
    if ($a > 3) { break; } // stop after three products
    // show your product here
    $a++;
}
Note that a check like if($a%3==0){ break; } would exit as soon as $a reaches 3 and therefore show only two products.
Q:
Using enums in Swift
I am new to Swift, after years of programming in Objective-C.
I declare this in a file:
public enum Identifier {
case car, boat, toy, water
}
From another class I do:
var type : Identifier = Identifier.car
ERROR: Use of undeclared Identifier
I also tried
class MyTypes {
enum Identifier {
case car, boat, toy, water
}
}
and then
var type : MyTypes = MyTypes.Identifier.car
How do I use that?
A:
Possible sources of the problem include that the file hasn’t been saved yet or that the file hasn’t been included in your Xcode project’s target. It would appear that the latter issue is the problem here.
By the way, your second example, defined within MyTypes, should be declared as follows:
var type: MyTypes.Identifier = .car
Or as:
var type = MyTypes.Identifier.car
Q:
Many Fields of Different Types in a Class. Design As Collection?
I am developing a program that shows data from a game being played in its GUI. I have therefore made a Player class with many fields such as _hp, _maxHp, _mp, _maxMp, _tp, _maxTp, _summonHp, _xCoordinate, _yCoordinate, _zCoordinate, etc. This class reads the required data from memory in a while(true) loop and updates the GUI if a value has changed.
How would you recommend storing these fields in the player class? Should I put them in some kind of dictionary, or simply have them each as their own private field.
Note: They have different types (some are int, some are float, some are ushort and some are string).
Thanks a lot for any ideas.
A:
You have some inherent structure in your fields that you are not reflecting in your class design. Consider something like
public class Health
{
public int Hp { get; set; }
public int MaxHp { get; set; }
}
public class Location // Or use an existing Point3D implementation.
{
public double X { get; set; }
public double Y { get; set; }
public double Z { get; set; }
}
public class Player
{
public Location Location { get; set; }
public Health Health { get; set; }
}
You can change the scope of the classes as needed for your overall design (e.g. private or internal).
Q:
onChange event triggered twice within ReactJS project
Can someone explain why this event is fired off twice?
Here is my mainContent Component
class MainContent extends React.Component {
constructor() {
super()
this.state = {
todos: ToDosData
}
this.handleChange = this.handleChange.bind(this)
}
handleChange(id) {
this.setState(prevState => {
const updatedToDos = prevState.todos.map(todo => {
if (todo.id === id) {
console.log(!todo.completed)
todo.completed = !todo.completed
}
return todo
})
console.log(updatedToDos)
return {
todos: updatedToDos
}
})
}
render() {
const mainBodyStyles = {
color: "#FF8C00",
backgroundColor: "#fG7B02",
}
const todoItems = this.state.todos.map(item =>
<TodoItem
key={item.id}
item={item}
handleChange={this.handleChange}
/>)
return (
<div style={mainBodyStyles}>
{todoItems}
</div>
)
}
here is my toDo component
function TodoItem(props) {
return (
<div>
<input
type="checkbox"
checked={props.item.completed}
onChange={() => props.handleChange(props.item.id)}
/>
<p>{props.item.text}</p>
</div>
)
}
When I click on a checkbox It runs the event function twice. I can't wrap my head around what I am doing wrong. Thanks in advance.
A:
It is expected that state updaters are called twice in React.StrictMode. Refer to this answer:
Why is my function being called twice in React?
Update your handleChange function to the one below.
handleChange(id) {
  const newTodos = this.state.todos.map((todo) => {
    if (todo.id === id) {
      // return a fresh object rather than mutating the one held in state
      return { ...todo, completed: !todo.completed };
    }
    return todo;
  });
  this.setState({ todos: newTodos });
}
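As a side note, the double invocation is only visible when the updater mutates the todo objects. A minimal sketch (plain JS, no React) of why a pure updater is harmless even when StrictMode calls it twice with the same prevState:

```javascript
// A pure toggle: returns new objects and never mutates its input.
function toggleTodo(todos, id) {
  return todos.map(todo =>
    todo.id === id ? { ...todo, completed: !todo.completed } : todo
  );
}

const prev = [{ id: 1, completed: false }];
toggleTodo(prev, 1);              // first (discarded) StrictMode call
const next = toggleTodo(prev, 1); // second call: same input, same result

console.log(next[0].completed);   // true
console.log(prev[0].completed);   // false (prev was never mutated)
```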
Q:
Stable pairing algorithm with only set of priorities?
Consider the Stable Marriage Algorithm:
In mathematics and computer science, the stable marriage problem (SMP) is the problem of finding a stable matching between two sets of elements given a set of preferences for each element. A matching is a mapping from the elements of one set to the elements of the other set. A matching is stable whenever it is not the case that both: (a) some element A of the first set prefers some element B of the second set over the element A is currently matched to, and (b) B also prefers A over the element B is currently matched to.
The stable marriage algorithm is a complete and optimal solution to the stable marriage problem.
However, I have a different, yet similar, problem. I need an algorithm that, given two sets of elements, will find a stable and optimal pairing between them. The catch is that in my problem, only one of the two sets has preferences; the other side doesn't care.
To bring a real life analogy to this, consider the problem of job assigning:
In a group software engineering project, there are m employees and n
different tasks to be accomplished. Each employee has his/her own
experiences and expertise so cares about which task he/she gets to
work on. The manager asks each employee to write down a preference
list of the tasks, ranking each task. What would be an algorithm to
pair each employee with ONE task, so that employee satisfaction is
maximized.
If n > m, there will be left over tasks, this is ok, they can be
completed by interns or contractors.
Note: one easy way to quantize employee satisfaction is by simply adding up the rankings of the jobs that each employee got.
For example: if employee a got his first choice, and employee b got her third choice, and employee c got his 2nd choice, the overall employee satisfaction is 1 + 3 + 2 = 6.
Minimizing this number will maximize satisfaction.
A:
This is known as the assignment problem. The textbook example is transportation: n packages need to be delivered but there are only m drivers (m < n), and there is a cost associated with each assignment. I believe your problem can be cast into that form.
The most common algorithm to solve this is the Kuhn-Munkres algorithm, also known as the Hungarian algorithm. This algorithm is available online in many programming languages, so google and go forth!
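To make the cost minimization concrete, here is a brute-force Python sketch (the function name and the sample ranking matrix are made up for illustration; for real sizes you would use a library implementation of the Hungarian algorithm):

```python
from itertools import permutations

def best_assignment(rank):
    """rank[e][t] = employee e's ranking of task t (1 = first choice).
    Exhaustively tries every way of giving each employee a distinct task
    and returns (total satisfaction cost, chosen tasks). Exponential, so
    only suitable for tiny inputs; Kuhn-Munkres does this in O(n^3)."""
    m, n = len(rank), len(rank[0])
    best_cost, best = float("inf"), None
    for tasks in permutations(range(n), m):  # one distinct task per employee
        cost = sum(rank[e][t] for e, t in enumerate(tasks))
        if cost < best_cost:
            best_cost, best = cost, tasks
    return best_cost, best

# Three employees ranking three tasks; each has a different first choice.
rank = [[1, 2, 3],
        [3, 1, 2],
        [2, 3, 1]]
print(best_assignment(rank))  # (3, (0, 1, 2)): everyone gets their first choice
```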
Q:
How do I concatenate strings in a bash script?
How can I concatenate strings and variables in a shell script?
stringOne = "foo"
stringTwo = "anythingButBar"
stringThree = "? and ?"
I want to output "foo and anythingButBar"
A:
Nothing special is needed; you simply place them next to each other in the assignment.
For example:
stringOne="foo"
stringTwo="anythingButBar"
stringThree=$stringOne$stringTwo
echo $stringThree
fooanythingButBar
If you want the literal word 'and' between them:
stringOne="foo"
stringTwo="anythingButBar"
stringThree="$stringOne and $stringTwo"
echo $stringThree
foo and anythingButBar
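As a side note (my addition, not part of the original answer), bash also has a += operator for appending to an existing variable:

```shell
stringOne="foo"
stringOne+=" and anythingButBar"   # += appends in bash (not plain POSIX sh)
echo "$stringOne"                  # foo and anythingButBar
```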
A:
If instead you had:
stringOne="foo"
stringTwo="anythingButBar"
stringThree="%s and %s"
you could do:
$ printf "$stringThree\n" "$stringOne" "$stringTwo"
foo and anythingButBar
Q:
Show that 101 is a prime number given the fact that 10 squared is 100 and 11 squared is 121.
Question: Use the fact that $10^2 = 100$ and $11^2 = 121$ to show that the number 101 is a prime number.
Could someone please give me a hint on how to solve this problem? I can't seem to relate $11^2=121$ to the fact that 101 is a prime number. I thought of using gcd theorems or modular arithmetic, but I don't know where to start.
A:
Here is one way to use both the facts you were given to start with, assuming that you understand that $11$ is a prime number and the prime factorization of $10$ is $2\cdot 5$.
Since $11^2=121>101$, you know that the product of any set of $2$ or more primes, all of which are $\ge 11$, will be $\ge 121 > 101$, so such sets of prime factors are ruled out.
You (should) also know that for any two consecutive integers $\gcd(n,n+1)=1$. Since $100=10^2=2^2\cdot 5^2$, the primes ($2,5$), being factors of $100$, cannot be factors of $101$.
So if $101$ is composite, it must have at least one prime factor smaller than $11$, but not $2$ or $5$. The only candidates that meet those restrictions are $3$ and $7$. But $3\nmid 101$ (since $101 = 3\cdot 33 + 2$) and $7\nmid 101$ (since $101 = 7\cdot 14 + 3$). Ergo, $101$ cannot be composite, which means that it is prime.
Q:
Iteration variable goes out of range in for-loop
I have a for-loop in my function:
for (int i = vector1.size() - 1, j = vector2.size() - 1;i >= vector1.size() - Get_polynomial_power(vector1) - 1;--i, --j) {
// some code
something = vector1.at(i); // <- here i goes out of range
}
The problem is that the iteration variable i goes out of range.
The exit condition is i >= vector1.size() - Get_polynomial_power(vector1) - 1, which equals i >= 0 in my specific case (Get_polynomial_power is a user-defined function that returns an int value).
The last value of i is -1, so the program terminates with an "out of range" exception.
But if I set condition for exiting the loop directly with 0, so it looks like:
for (int i = vector1.size() - 1, j = vector2.size() - 1;i >= 0;--i, --j) {
// some code
something = vector1.at(i); // <- here i doesn't go out of range
}
then there is no error and the last value of i is 0 so it doesn't go out of range.
So the question is: why doesn't the iteration variable go out of range when the exit condition is written directly as 0, but the error appears when it is written as an expression that evaluates to 0?
I tried a solution with loop-local value as was told in other similar questions, like:
for (int i = vector1.size() - 1, j = vector2.size() - 1;i >= vector1.size() - Get_polynomial_power(vector1) - 1;--i, --j) {
// some code
int index = i;
something = vector1.at(index);
}
but it didn't change anything.
Just in case, here is full code of my function:
vector<int> Polynomial_addition_substraction(vector<int> polynomial1, vector<int> polynomial2) {
vector<int> addition_substraction_result;
vector<int> vector1;
vector<int> vector2;
if (Get_polynomial_power(polynomial1) >= Get_polynomial_power(polynomial2)) {
vector1 = polynomial1;
vector2 = polynomial2;
}
else {
vector1 = polynomial2;
vector2 = polynomial1;
}
for (int i = vector1.size() - 1, j = vector2.size() - 1;i>=vector1.size()-Get_polynomial_power(vector1)-1;--i,--j) {
if (j < 0) {
addition_substraction_result.insert(addition_substraction_result.begin(), vector1.at(i));
}
else {
addition_substraction_result.insert(addition_substraction_result.begin(), vector1.at(i) ^ vector2.at(j));
}
}
return addition_substraction_result;
}
A:
The issue is that .size() evaluates to an unsigned type, so the subtraction happens in unsigned arithmetic. (In an expression mixing an unsigned value and an int, the int is converted to unsigned.) The effect is particularly noticeable if the vector is empty.
So wraparound takes the expression's value to a large unsigned number, and that yields the subscripting error.
The solution is to write an expression of the form
i > size() - a
as
i + a > size()
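A compilable sketch of both forms (the function names are made up for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// The condition as written in the question: v.size() is unsigned, so the
// subtraction wraps around, and i is then converted to unsigned for the
// comparison, which keeps the loop running past i == -1.
bool naive_condition(int i, const std::vector<int>& v, int a) {
    return i >= v.size() - a;   // -Wsign-compare flags exactly this
}

// Rearranged as the answer suggests: "i >= size() - a" becomes
// "i + a >= size()", keeping the left side in signed arithmetic
// (assuming i + a does not itself overflow).
bool safe_condition(int i, const std::vector<int>& v, int a) {
    return i + a >= static_cast<int>(v.size());
}

// With v.size() == 3 and a == 3 the intended test is "i >= 0":
//   naive_condition(-1, v, 3) is true  (wraparound keeps looping, so at() throws)
//   safe_condition(-1, v, 3)  is false (loop exits as intended)
```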
Q:
How can I draw a rectangle or sprite onto another sprite in xna?
I'm working on a 2D game in XNA. Basically I want to draw a rectangle or sprite on another sprite, with collision. I can detect the collision and everything, but I don't know how to draw on top of the sprite. I'm drawing the rectangle on the sprite, but it's drawn behind the sprite, which makes the rectangle invisible. Is there a way I can accomplish this?
A:
Here is an example of layer ordering:
public enum TextureName : byte
{
Black,
Yellow
}
private GraphicsDeviceManager graphics;
private SpriteBatch spriteBatch;
public struct Sprite
{
public Vector2 Position;
public Texture2D Texture;
public Sprite(Texture2D texture, Vector2 position)
{
Position = position;
Texture = texture;
}
}
public Dictionary<TextureName, Sprite> Sprites { get; private set; }
protected override void LoadContent()
{
spriteBatch = new SpriteBatch(GraphicsDevice);
Sprites = new Dictionary<TextureName, Sprite>();
Vector2 position = new Vector2(graphics.PreferredBackBufferWidth / 2 - 64, graphics.PreferredBackBufferHeight / 2 - 64);
Sprites.Add(TextureName.Black, new Sprite(Content.Load<Texture2D>(@"black_sprite"), position));
Sprites.Add(TextureName.Yellow, new Sprite(Content.Load<Texture2D>(@"yellow_sprite"), position + new Vector2(32, 32)));
}
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.CornflowerBlue);
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
spriteBatch.Draw(Sprites[TextureName.Black].Texture, Sprites[TextureName.Black].Position, null, Color.White, 0.0f, Vector2.Zero, 1.0f, SpriteEffects.None, 0);// -> will be drawn first
spriteBatch.Draw(Sprites[TextureName.Yellow].Texture, Sprites[TextureName.Yellow].Position, null, Color.White, 0.0f, Vector2.Zero, 1.0f, SpriteEffects.None, 0);// -> will be drawn second
spriteBatch.End();
base.Draw(gameTime);
}
So the black sprite will be behind.
You can reorder your Sprites with ease using a SpriteSortMode of the SpriteBatch.Begin() method and setting layer depth for SpriteBatch.Draw() method as shown below.
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.CornflowerBlue);
spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.AlphaBlend);
spriteBatch.Draw(Sprites[TextureName.Black].Texture, Sprites[TextureName.Black].Position, null, Color.White, 0.0f, Vector2.Zero, 1.0f, SpriteEffects.None, 1.0f/* layer depth*/);
spriteBatch.Draw(Sprites[TextureName.Yellow].Texture, Sprites[TextureName.Yellow].Position, null, Color.White, 0.0f, Vector2.Zero, 1.0f, SpriteEffects.None, 0.0f/* layer depth*/);
spriteBatch.End();
base.Draw(gameTime);
}
The output will be:
If you are using different sprite batches from other classes, the first example will work fine; just take care to call each class's Draw() in the right order from the main game Draw method.
Be aware when using the second example: it works much slower than the first.
Q:
What kind of query in LINQ do I have to make to get a custom List that contains Lists of objects?
What I'm trying to do is find the best and fastest way to obtain a collection of objects from a query so I can serialize them to JSON and send them through a WCF web service. I have an entity that is related to two other entities; the properties, as shown in my edmx, are like this:
Event
    EventId
    Date
    Acceleration
    Intensity
    DeviceId
    BlockId
    Device
    Block
Device
    DeviceId
    Alias
    ClusterId
    Cluster
    Events
Block
    BlockId
    DateStart
    DateEnd
    Events
What I want is a List of Lists of the Events associated with every Block, each Block's Events represented by its own List. However, I only want the EventId, Date, DeviceId, BlockId, Acceleration, and Intensity of every Event, because I want to avoid the circular reference that would be caused by trying to serialize the Device and Block properties of each Event.
I tried something like
var result = (from d in context.Block select (d.Events)).ToList();
but this only returns a List containing the Event objects of every Block; it doesn't restrict each Event to the fields I listed.
How can I specify in my query the information I want to retrieve?
A:
I think it will be easier using Method Based query:
var result = context.Block.Select(b => b.Events.Select(e => new
{
e.EventId,
e.Date,
e.DeviceId,
e.BlockId,
e.Accelleration,
e.Intensity
}).ToList()).ToList();
Q:
Fancy part name in TOC
I was just trying to add a fancy part name to the TOC using code I got from a post on this site. Here is what I added:
\documentclass{book}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{kpfonts}
\usepackage{tikz}
\usepackage{titletoc}
%------------------------------------------
\contentsmargin{0cm}
%------------------------------------------
\titlecontents{part}[0pc]
{\addvspace{13pt}%
\begin{tikzpicture}%
\draw[help lines,step=.4cm,color=blue] (0,0) grid (2.4,1.2);%
\pgftext[left,x=.1cm,y=.6cm]{\Large\sc \partname};%
\fill[fill=white,draw=blue] (1.8,.6) circle (0.4cm);%
\pgftext[x=1.8cm,y=.6cm]{\protect\thepart};%
\end{tikzpicture}\\\color{blue}\large\sc\bfseries}%
{}
{}
{\;\titlerule\;\large\bfseries \thecontentspage}%
%------------------------------------------
\titlecontents{chapter}[0pc]
{\addvspace{30pt}%
\begin{tikzpicture}%
\draw[help lines,step=.4cm,color=red] (0,0) grid (3.2,1.2);%
\pgftext[left,x=.1cm,y=.6cm]{\Large\sc chapter};%
\fill[fill=white,draw=red] (2.7,.6) circle (0.35cm);%
\pgftext[x=2.7cm,y=.6cm]{\thecontentslabel};%
\end{tikzpicture}\\\color{red}\large\sc\bfseries}%
{}
{}
{\;\titlerule\;\large\bfseries \thecontentspage}%
%------------------------------------------
\titlecontents{section}[2.4pc]
{\addvspace{1pt}}
{\contentslabel[\thecontentslabel]{2.4pc}}
{}
{\hfill\small \thecontentspage}
%------------------------------------------
[]
\titlecontents*{subsection}[4pc]
{\addvspace{-1pt}\small}
{}
{}
{\ --- \small\thecontentspage}
[ \textbullet\ ][]
%-------------------------
\begin{document}
\tableofcontents
\part{Part One}
\chapter{(title chapter 1)}
\section{(title section 1)}
\subsection{(title sub-section 1)}
\subsection{(title sub-section 2)}
\section{(title section 2)}
\subsection{(title sub-section 1)}
\subsection{(title sub-section 2)}
%--------------------------
\end{document}
Here is what I get:
My problem:
The part number I is not appearing inside the circle after PART; instead it appears below, with the part name.
How do I get the part number inside the circle, as it is for chapters? What else do I have to add?
A:
The redefinitions for part using titletoc will work as soon as you use the newparttoc option for the titlesec package and provide a suitable redefinition for \part using titlesec; in the following example (just for the example's sake) I gave a quick redefinition of \part using titlesec just to illustrate the effect on the part entries in the ToC:
\documentclass{book}
\usepackage{tikz}
\usepackage[newparttoc]{titlesec}
\usepackage{titletoc}
%------------------------------------------
\contentsmargin{0cm}
%------------------------------------------
\titleformat{\part}[display]
{\normalfont\huge\bfseries}{\thepart}{20pt}{\Huge}
\titlecontents{part}[0pc]
{
\protect\addvspace{13pt}%
\begin{tikzpicture}%
\draw[help lines,step=.4cm,color=blue] (0,0) grid (2.4,1.2);%
\pgftext[left,x=.1cm,y=.6cm]{\Large\scshape\partname};%
\fill[fill=white,draw=blue] (1.8,.6) circle (0.4cm);%
\node at(1.8cm,.6cm) {I};%
\end{tikzpicture}\\\color{blue}\large\scshape\bfseries
\thepart}%
{}
{l}
{$\;$\titlerule$\;$\large\bfseries\thecontentspage}%
\begin{document}
\tableofcontents
\part{Part One}
\end{document}
Q:
Postgres Insert without ANY VALUES FOR COLUMNS. ALL ARE DEFAULT
I have a table in Postgres that only has default column values (id, created_at).
The only way I can insert into this table is
INSERT INTO pages(id) VALUES(DEFAULT) RETURNING id;
Why can't I do this:
INSERT INTO pages RETURNING id;
Just curious.
A:
You can use:
INSERT INTO test DEFAULT VALUES returning id;
All the explanations you want are right here: https://www.postgresql.org/docs/current/sql-insert.html
PostgreSQL's syntax here is quite strict.
DEFAULT VALUES :
All columns will be filled with their default values. (An OVERRIDING clause is not permitted in this form.)
Q:
Help with differentiation under the integral and complex analysis
I'm working on tough integrals that basically contain a fraction inside them. Here's a (simplified) example:
$$\int_{-\pi}^\pi{\frac{1+e^{i t}}{e^{i t}}dt}$$
I'm interested in solving this using differentiation under the integral, and I'm hoping that someone can help me. My work so far follows...
First, I restate the integral as follows:
$$-i\int_{-\pi}^\pi{\frac{1+e^{i t}}{e^{i t}}\frac{ie^{i t}}{e^{i t}}dt}=
-i\int_{|z|=1}{\frac{1+z}{z}\frac{1}{z}dz}$$
Here's where we can use differentiation under the integral, even if it seems too much for such a simple problem.
The integral is now:
$$-i\int_{|z|=1}{\frac{1+z}{z^2}dz}$$
and the fractions in some of the problems that I'm working on will be extremely hard to work with. So I set up a new function:
$$F(y)=-i\int_{|z|=1}{(1+z)\sin{\left(\frac{y}{z^2}-\frac{1}{z^2}\right)}dz}$$
Note that taking the derivative w.r.t. $y$ yields the result I'm looking for, if $y$ is then set equal to $1$:
$$F'(y)=-i\int_{|z|=1}{\frac{1+z}{z^2}\cos{\left(\frac{y}{z^2}-\frac{1}{z^2}\right)}dz} $$
$$F'(1)=-i\int_{|z|=1}{\frac{1+z}{z^2}\cos{0}\,dz}=-i\int_{|z|=1}{\frac{1+z}{z^2}dz}$$
This method then seems extremely easy to do, even with far more complicated divisors in the fractions, since I can use Cauchy to integrate, and then proceed with differentiation under the integral. I'd like to know if this seems correct so far. I'm wondering if there are additional considerations I need to be aware of before getting too complex. But it seems that I have a chance of "building" integrals in this way, so I want to be sure that I'm correct.
If this seems correct and checks out, I'd really like to know, and maybe this is too much for this question, if I can use this with multiple fractions and multiple variables. For instance, would a function like $\int{\frac{p(x)}{q(x)r(x)}}$ be easy to adapt to this method?
A:
You state that your integral is equal to $-i\int_{|z|=1}\frac{1+z}{z}\frac{1}{z}\mathrm{d}z$. This is $-i\int_{|z|=1}\left(\frac{1}{z^2}+\frac{1}{z}\right)\mathrm{d}z$. Since the residue of $z^{-2}$ is $0$, and the residue of $z^{-1}$ is $1$, we get that
$$
-i\int_{|z|=1}\left(\frac{1}{z^2}+\frac{1}{z}\right)\mathrm{d}z=-i(0+2\pi i)=2\pi
$$
Another way of evaluating your integral is
$$
\begin{align}
\int_{-\pi}^\pi\frac{1+e^{it}}{e^{it}}\mathrm{d}t
&=\int_{-\pi}^\pi\left(1+e^{-it}\right)\mathrm{d}t\\
&=2\pi+0
\end{align}
$$
since the integral of $e^{-it}$ over a complete period is $0$.
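Both results can be sanity-checked numerically. The following sketch (added here for illustration, not part of the original answer) approximates the integral of $(1+e^{it})/e^{it}$ over $[-\pi,\pi]$ with a simple trapezoidal rule and compares it against $2\pi$:

```python
import cmath
import math

def f(t: float) -> complex:
    # the integrand (1 + e^{it}) / e^{it}
    return (1 + cmath.exp(1j * t)) / cmath.exp(1j * t)

# trapezoidal rule on [-pi, pi]; for a periodic integrand this converges very fast
N = 4096
h = 2 * math.pi / N
ts = [-math.pi + k * h for k in range(N + 1)]
approx = h * (0.5 * f(ts[0]) + sum(f(t) for t in ts[1:-1]) + 0.5 * f(ts[-1]))

print(approx.real, 2 * math.pi)  # both approximately 6.283185...
print(abs(approx.imag))          # approximately 0
```

The numerical value agrees with the residue computation above.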
Q:
How do I combine the coordinate pairs of an array into a single index?
I have an array
A = [3, 4; 5, 6; 4, 1];
Is there a way I could convert all coordinate pairs of the array into linear indices such that:
A = [1, 2, 3]'
whereby (3,4), (5,6), and (4,1) are represented by 1, 2, and 3, respectively.
Many thanks!
The reason I need this is that I have to loop through array A making use of each coordinate pair (3,4), (5,6), and (4,1) in turn, because I need to feed each of these pairs into a function so as to make another computation. See the pseudo code below:
for ii = 1: length(A);
[x, y] = function_obtain_coord_pairs(A);
B = function_obtain_fit(x, y, I);
end
whereby, at ii = 1, x=3 and y=4. The next iteration takes the pair x=5, y=6, etc.
Basically what will happen is that my kx2 array will be converted to a kx1 array. Thanks for your help.
A:
Adapting your code, what you want was suggested by @Ander in the comments...
Your code
for ii = 1:length(A);
[x, y] = function_obtain_coord_pairs(A);
B = function_obtain_fit(x, y, I);
end
Adapted code
for ii = 1:size(A,1);
x = A(ii, 1);
y = A(ii, 2);
B = function_obtain_fit(x, y, I); % is I here supposed to be ii? I not defined...
end
Your unfamiliarity with indexing makes me think your function_obtain_fit function could probably be vectorised to accept the entire matrix A, but that's a matter for another day!
For instance, you really don't need to define x or y at all...
Better code
for ii = 1:size(A,1);
B = function_obtain_fit(A(ii, 1), A(ii, 2), I);
end
Q:
Sort Dictionary which contains list of dictionaries by value
I've got a dictionary of the following format:
{"key1": [{"title":"bla bla", "percentage": "0.3493"},{"title":"bla bla bla", "percentage":"0.293"}],
"key2": [{"title":"bla bla", "percentage": "0.635"},{"title":"bla bla bla", "percentage":"0.987"}]}
So basically it is a dictionary which contains lists of dictionaries as values.
I want to sort this in descending order by the percentage field - so with the above example I would like to obtain:
{"key1": [{"title":"bla bla", "percentage": "0.3493"},{"title":"bla bla bla", "percentage":"0.293"}],
"key2": [{"title":"bla bla bla", "percentage": "0.987"},{"title":"bla bla", "percentage":"0.635"}]}
I would also like to obtain a global view of the highest percentages. For example:
"key2" : {"title":"bla bla bla", "percentage": "0.987"}
"key2" : {"title":"bla bla", "percentage": "0.635"}
"key1" : {"title":"bla bla", "percentage": "0.3493"}
"key1" : {"title":"bla bla bla", "percentage":"0.293"}
I've looked into various ways of sorting in Python, but I'm still not sure how to achieve this.
A:
We sort the list of values for each key in the dictionary on the value of the percentage key in descending order, and then use a dictionary comprehension to recreate the dictionary:
dct = {"key1": [{"title":"bla bla", "percentage": "0.3493"},{"title":"bla bla bla", "percentage":"0.293"}],
"key2": [{"title":"bla bla", "percentage": "0.635"},{"title":"bla bla bla", "percentage":"0.987"}]}
# use float() so percentages compare numerically rather than as strings
result = {key: sorted(value, key=lambda x: float(x['percentage']), reverse=True) for key, value in dct.items()}
print(result)
The output will be
{'key1': [{'title': 'bla bla', 'percentage': '0.3493'},
{'title': 'bla bla bla', 'percentage': '0.293'}],
'key2': [{'title': 'bla bla bla', 'percentage': '0.987'},
{'title': 'bla bla', 'percentage': '0.635'}]}
For the global view, we first update the inner dictionaries so we have the key attribute present there.
We then create the overall list of values (global view) by merging all list of values, and then sorting them on percentage in a descending order
dct = {"key1": [{"title":"bla bla", "percentage": "0.3493"},{"title":"bla bla bla", "percentage":"0.293"}],
"key2": [{"title":"bla bla", "percentage": "0.635"},{"title":"bla bla bla", "percentage":"0.987"}]}
#Update inner dictionaries with the name of the key for each dictionary
for key, value in dct.items():
for v in value:
v.update({'key':key})
global_view = sorted([v for value in dct.values() for v in value], key=lambda x: float(x['percentage']), reverse=True)
print(global_view)
The output here will be
[
{'title': 'bla bla bla', 'percentage': '0.987', 'key': 'key2'},
{'title': 'bla bla', 'percentage': '0.635', 'key': 'key2'},
{'title': 'bla bla', 'percentage': '0.3493', 'key': 'key1'},
{'title': 'bla bla bla', 'percentage': '0.293', 'key': 'key1'}
]
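One caveat worth noting: the percentage values are strings, so a plain sorted() compares them lexicographically. That happens to give the right order for the sample data, but it breaks once the numbers differ in magnitude. A small sketch (the values below are made up for illustration):

```python
values = ["9.5", "10.2", "0.987"]

as_strings = sorted(values, reverse=True)              # character-by-character
as_numbers = sorted(values, key=float, reverse=True)   # numeric comparison

print(as_strings)  # ['9.5', '10.2', '0.987'] -- '9' > '1' as a character
print(as_numbers)  # ['10.2', '9.5', '0.987']
```

This is why converting with float() when sorting on 'percentage' is the safer choice.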
Q:
Rotate table in T-SQL
I have a table that contains sequential dates in the first column and the type of date (CreatedOn or ClosedOn) in the second. I need to write a SELECT that returns 2 columns (CreatedOn, ClosedOn) from my table.
I have this:
| Date | ColumnName |
|------------|------------|
| 2017-01-01 | ClosedOn |
| 2017-01-02 | CreatedOn |
| 2017-01-03 | ClosedOn |
| 2017-01-04 | CreatedOn |
And I need to get this:
| CreatedOn | ClosedOn |
|------------|------------|
| NULL | 2017-01-01 |
| 2017-01-02 | 2017-01-03 |
| 2017-01-04 | NULL |
I've tried this:
SELECT
CASE [ColumnName]
WHEN 'CreatedOn' THEN [Date]
ELSE NULL
END,
CASE [ColumnName]
WHEN 'ClosedOn' THEN [Date]
ELSE NULL
END
FROM #Temp
but it doesn't work.
A:
Try this; I hope it helps. You may have to test it and modify it as needed, but if my understanding is correct the logic should be sufficient to build on.
;WITH cte_TestData(Date,ColumnName) AS
(
SELECT '2017-01-01','ClosedOn ' UNION ALL
SELECT '2017-01-02','CreatedOn' UNION ALL
SELECT '2017-01-03','ClosedOn ' UNION ALL
SELECT '2017-01-04','CreatedOn'
)
,cte_PreserveSeq AS
(
SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS SeqID,Date,ColumnName
FROM cte_TestData
)
,cte_PreResult AS
(
SELECT *
,LEAD (ColumnName, 1,0) OVER (ORDER BY SeqID) AS NextColumnName
,LEAD (Date, 1,0) OVER (ORDER BY SeqID) AS NextDate
,LAG (ColumnName, 1,0) OVER (ORDER BY SeqID) AS PreviousColumnName
,LAG (Date, 1,0) OVER (ORDER BY SeqID) AS PreviousDate
FROM cte_PreserveSeq
)
SELECT DISTINCT
CASE
WHEN ColumnName = 'CreatedOn' AND NextColumnName = 'ClosedOn' THEN DATE
WHEN ColumnName = 'ClosedOn' AND PreviousColumnName = 'CreatedOn' THEN PreviousDate
WHEN ColumnName = 'CreatedOn' THEN DATE
ELSE NULL
END AS CreatedOn,
CASE
WHEN ColumnName = 'CreatedOn' AND NextColumnName = 'ClosedOn' THEN NextDate
WHEN ColumnName = 'ClosedOn' THEN DATE
ELSE NULL
END AS ClosedOn
FROM cte_PreResult
Q:
Netbeans project file generation extremely slow (unusable)
I've been having issues with a Java project that I've been working on for a while.
Starting about 1 or 2 weeks ago, whenever I use Netbeans (8.0.2) to generate a new file in the project (right click on package > new file), the wizard will hang for up to 10 minutes before releasing control back to me. The file is created after about 5 minutes. This doesn't happen with any other project, only this one; but I can't find anything different in my project's configuration compared to projects that work.
I created a bug report about this on the Netbeans bug tracker, but it hasn't been looked at in over a week. It has a copy of the Netbeans output log, and a profiling snapshot of the class generation.
I've tried reinstalling Netbeans (remaining at 8.0.2), which didn't help, and I don't really know what else I can do to locate the problem. If anyone has experienced anything like this, or has any advice on how I can track down the issue, it would be greatly appreciated.
Here is a link to my project on Dropbox. Feel free to download a copy, compile it, run it, etc.
I am using Windows 7 64-bit, and I am using the official Netbeans 8.0.2 from netbeans.org, launched straight from the desktop (I am not using any particular command line arguments or environment variables, as far as I know)
A:
It turned out that the issue was that my Mercurial client was hanging when it made status calls, and Netbeans, due to a bug, was stuck waiting for it forever.
The issue with Mercurial can be worked around by deleting the Mercurial log file, and the bug with Netbeans was eventually fixed.
Q:
python callback function with arguments bad practice?
I wrote a listener class that executes a programmer-specified callback.
The msg is provided as a callback argument. I realized that a programmer using the class will need to look at my code to see the structure of the msg.
Is there a way of providing type hints for callback functions? Or should I refactor my callback to be a notification of a message, so the programmer can invoke a get function to retrieve the actual msg after the notification? (IDE help works in this case, but the class is a little bit harder to use.)
Alternatively I could just pass something generic, like a dictionary, in the argument.
A:
Regarding the callback argument, you can use Python's typing module (docs). You want Callable, whose subscription syntax must always be used with exactly two values: the argument list and the return type:
from typing import Callable

def listener(callback: Callable[[int, str], int]):
    # do magic here
    ...

def my_callback(a: int, b: str) -> int:  # renamed: "callable" would shadow the builtin
    # user-defined callable goes here
    ...
In this case, IDEs (like PyCharm and modern Jedi versions) will detect the Callable type hints, and the listener's user won't have to look up the callback structure.
Whether it's better to provide a dictionary or something more generic is debatable and depends on the actual use cases.
P.S. Of course, the typing module is not available out of the box in Python 2.
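To make the idea concrete, here is a small runnable sketch (the class and names are illustrative, not taken from the question):

```python
from typing import Callable, List, Tuple

# The callback signature is advertised in one place:
MsgCallback = Callable[[str, int], None]  # (text, sequence_number) -> None

class Listener:
    """Toy listener that hands each message to a programmer-specified callback."""

    def __init__(self, callback: MsgCallback) -> None:
        self._callback = callback

    def simulate_incoming(self, text: str, seq: int) -> None:
        # In a real listener this would fire when a message arrives.
        self._callback(text, seq)

received: List[Tuple[str, int]] = []

def on_message(text: str, seq: int) -> None:
    received.append((text, seq))

listener = Listener(on_message)
listener.simulate_incoming("hello", 1)
print(received)  # [('hello', 1)]
```

An IDE can now flag a callback with the wrong signature at the registration site, which is exactly the kind of hint the question asks about.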
Q:
Sliding menus: how to deal with actually changing view controllers
It seems 50% of all iPhone apps are using Facebook-like sliding menus these days. I've also created a few apps with this UI, using the ViewDeck library (https://github.com/Inferis/ViewDeck). The left view is a UITableView, clicking on an item changes the center view.
I've been struggling with good "menu management" though. Do you create an NSArray with all the view controllers? Is it better to lazy load one at a time? How do you deal with memory? Not really sure what the best way is while keeping memory usage as low as possible.
When I look at these sliding menu libraries, there's never a full fledged example app with a working menu and multiple controllers. Like I said, I've created a couple apps using ViewDeck, but the actual changing of the view controllers always feels clunky and not optimal at all (array with all instantiated view controllers).
A:
I use an array for the view controllers, not for the views. A view is loaded only when the user selects the cell that points to its view controller, so it is lazy loading. If you think you need to be careful about memory, then on a memory warning you can release the view controllers you do not need right now.
Of course it depends on what you have in those controllers, but generally (standard UI) you don't need to release them. I have never needed to.
Q:
Building a continuous relationship curve between cutoff and percentages
I have raw data where I want to see what kind of cutoff level results in what percentage of observations above the cutoff level. Here is the simulation:
data<-rnorm(100,50,30)
prop.table(table(data>10))
prop.table(table(data>20))
prop.table(table(data>30))
prop.table(table(data>40))
prop.table(table(data>50))
prop.table(table(data>60))
prop.table(table(data>70))
prop.table(table(data>80))
prop.table(table(data>90))
Here is the output:
FALSE TRUE
0.1 0.9
FALSE TRUE
0.16 0.84
FALSE TRUE
0.29 0.71
FALSE TRUE
0.36 0.64
FALSE TRUE
0.51 0.49
FALSE TRUE
0.61 0.39
FALSE TRUE
0.75 0.25
FALSE TRUE
0.86 0.14
FALSE TRUE
0.91 0.09
But it is a crude and inefficient way, as you would agree. Instead of calculating the respective percentage for each cutoff value endlessly, I wanted to build a plot that represents that relationship, where the X axis would represent the range of all possible cutoff levels and the Y axis the percentages from 0 to 100. Something similar to this (example plot omitted).
Please ignore the axis labels etc. of the plot; this is only to provide a general example. Any suggestions?
A:
I believe you are looking for the ecdf() function to create an empirical cumulative distribution function.
data<-rnorm(1000,50,30)
a = ecdf(data)
plot(a)
A:
You write:
I have raw data where I want to see what kind of cutoff level results
in what percentage of observations above the cutoff level.
Taking what you write literally, you want the proportion of observations above the cutoff. Say the cutoff is X. The empirical CDF gives you the value P(x <= X), i.e. the proportion below the cutoff. If you want the value corresponding to P(x > X), you can use the equality P(x > X) = 1-P(x <= X).
For instance:
data<-rnorm(100,50,30) # your data
dat <- data.frame(x = sort(data)) # into sorted dataframe
dat$ecdf <- ecdf(data)(dat$x) # get cdf values for each x value
dat$above <- with(dat, 1-ecdf) # get values above
plot(dat$x, dat$above)
Having said all this, you are presenting the ECDF of a Gaussian distribution after all, which may indicate that you are looking for the ECDF instead. In this case, as already outlined in Vincent's answer, you can just plot the corresponding values of ecdf instead of above. Here is an example where I plot both.
To address your comment, I print one with a smooth line, using geom_smooth instead of geom_line.
library(ggplot2); library(scales)
ggplot(dat, aes(x=x)) +
geom_line(aes(y=ecdf), col="red" ) + # P(x<=X) in red
geom_smooth(aes(y=above), col="blue") + # Smooth version of P(x > X)
labs(y="Proportion", x="Variate") +
scale_y_continuous(labels=percent)
If you prefer the smoothed line to be printed without surrounding error intervals, you can add the option se=F. See ?geom_smooth.
To achieve something similar with base plot, you can use
plot(dat$x, dat$above, type="n")
lines(loess.smooth(dat$x, dat$above, span=1/6))
though you may have to play around with the span parameter.
Q:
cURL does not work for specified site
Why is my curl not working correctly?
$ch = curl_init("http://www1.caixa.gov.br/loterias/loterias/megasena/megasena_resultado.asp");
curl_exec($ch);
curl_close($ch);
A:
I discovered the solution. This link requires an authentication cookie:
<?php
$ch = curl_init();
$options = array(
CURLOPT_URL => 'http://www1.caixa.gov.br/loterias/loterias/megasena/megasena_pesquisa_new.asp',
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_MAXREDIRS => 50,
CURLOPT_COOKIE => 'ASPSESSIONIDCSSRTDCR=KMKMJHPDIHNGBBLJFNGDJKGK; security=true',
CURLOPT_RETURNTRANSFER => true
);
curl_setopt_array($ch, $options);
$senaHtml = curl_exec($ch);
curl_close($ch);
?>
Q:
Hide classes of library - Android
I have 3 projects that are used as libraries within a 4th (main project).
The 3 projects are compiled within each other as follows (build.gradle):
Library Project:
Project A
compile project(":projectA")
compile project(":projectB")
Project B
compile project(':projectC')
Main Project:
compile(name: 'projectA', ext: 'aar')
compile(name: 'projectB', ext: 'aar')
compile(name: 'projectC', ext: 'aar')
I would like to do something to the "Library Project", so that from within the Main Project, if I click on any class from within the Library project, I should either not be able to see the code, or it should be encrypted.
So for example if there is InterfaceA in ProjectA, and the main activity of the Main Project implements that interface, if I "Ctrl-Click" into the interface, the result should be similar to what I specified above.
I understand Proguard does something similar, but that is only if you are building a release .apk, I need the same result for compiled libraries.
A:
Many projects use ProGuard to achieve this protection.
You can use Gradle to build library components for Android
Libraries, just like apps, can be build in development or release build types
Proguard can be configured to run on a component (app or library), but only in the release build type. See here: https://sites.google.com/a/android.com/tools/tech-docs/new-build-system/user-guide#TOC-Running-ProGuard
If the component is minified (highly advised), then you need to tell ProGuard what the "root" classes are, otherwise it will minify the library to literally nothing. This can be achieved by adding a rule to the configuration file:
-keep class your.package.name {public *;}
A more extensive example is here: http://proguard.sourceforge.net/manual/examples.html#library
However there are some limitations:
ProGuard's main use is removing as much debug information, line numbers and names as possible from the bytecode without changing what the bytecode actually does. It replaces the names of members and arguments, and of non-public classes, with meaningless ones; for example vehicleLicensePlate might become _a. As any code maintainer will relate, bad member and variable names make maintenance really hard.
ProGuard can (slightly) modify bytecode by optimising as much as possible (computing constants defined as expressions, playing around with inlining, etc. The optimisations are listed here: http://proguard.sourceforge.net/FAQ.html#optimization)
ProGuard does not encrypt the bytecode - the JVM needs to see the actual bytecode otherwise it could not run the program.
So, obfuscation only makes it harder to reverse-engineer and understand a library, it cannot make this task impossible.
One last pointer: ProGuard dumps a file containing a list of what it has changed, in particular the line numbers. When you get stack traces back from your customers (or through online tools like Crashlytics) you can revert the obfuscation so you can debug. In any release-build process, you need to find a way to save this file.
This file is also needed when you make incremental releases of your library so the obfuscation is consistent with the previously released version. If you don't, the customer cannot drop-in replace your library and will have to do a complete rebuild (and link) of their app.
While ProGuard is a free-n-easy option which just works, there are other free and paid-for obfuscators. Some offer a few more features, but they are fundamentally the same, and the compatibility of ProGuard with IDEs, tools and services is excellent.
Q:
Why is poll() better than select()?
It is said that select() is not scalable because it needs to go over an array with the size of the max number of file descriptors (FDs): complexity O(max_num_FD). And it is said that poll() is better because it only goes over an array with the size of the number of active FDs: complexity O(num of active FDs). What does "active FD" mean?
Is poll() a popular choice for large-scale servers that have much data available at a time? Which socket approach does a large-scale server usually use in reality?
A:
Active FD means an open file descriptor.
Both select() and poll() are for single-threaded single-process programs to allow them to handle multiple connections at the same time. For instance OpenWRT's uhttpd web-server is like that.
select() and poll() are available on all Unices.
The better scaling O(1) versions are epoll on Linux and kqueue on BSDs. Less portable, though. But you can install libkqueue0 on Debian Linux.
Many programs use other approaches. For instance sshd, the SSH daemon, spawns a child process for each connection. Others handle each connection in a thread.
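In Python, these same mechanisms are wrapped by the selectors module, whose DefaultSelector picks the best one available on the platform (epoll on Linux, kqueue on BSDs, otherwise poll or select). A minimal sketch, using a socket pair in place of real client connections:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()           # stands in for a client/server connection
sel.register(b, selectors.EVENT_READ)

a.sendall(b"ping")                   # make the registered socket readable

received = b""
for key, mask in sel.select(timeout=1.0):
    received = key.fileobj.recv(16)  # the readable "active FD"

print(received)  # b'ping'

sel.unregister(b)
a.close()
b.close()
sel.close()
```

A single-threaded server would run sel.select() in a loop, registering each accepted connection the same way.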
Q:
Criar arquivo excel com c#
I have a problem: I need to write a C# class that creates an Excel file, but I'm not managing to do it.
Can anyone help me?
The idea is to generate an Excel file with a header and a row of values.
A:
The best tool I know of for what you need is EPPlus. There are several answers of mine on the site about it.
I'll put together a small tutorial for you on how to use it.
The bare minimum
using (var excelPackage = new ExcelPackage())
{
excelPackage.Workbook.Properties.Author = "Jaderson Pessoa";
excelPackage.Workbook.Properties.Title = "Meu Excel";
// Here you put the logic that writes to the worksheets.
string path = @"C:\teste.xlsx";
File.WriteAllBytes(path, excelPackage.GetAsByteArray());
}
Adding a worksheet and writing to it
using (var excelPackage = new ExcelPackage())
{
excelPackage.Workbook.Properties.Author = "Jaderson Pessoa";
excelPackage.Workbook.Properties.Title = "Meu Excel";
// Here I simply add the initial worksheet
var sheet = excelPackage.Workbook.Worksheets.Add("Planilha 1");
sheet.Name = "Planilha 1";
// Titles
var i = 1;
var titulos = new String[] { "Título Um", "Título Dois", "Título Três" };
foreach (var titulo in titulos)
{
sheet.Cells[1, i++].Value = titulo;
}
// Values
i = 1;
var valores = new String[] { "1", "2", "3" };
foreach (var valor in valores)
{
// Here I write the second row of the file with some values.
sheet.Cells[2, i++].Value = valor;
}
string path = @"C:\teste.xlsx";
File.WriteAllBytes(path, excelPackage.GetAsByteArray());
}
I think this is a good starting point for your C# class.
Q:
make http call from within another http call in angular
I call an http service, loop through the results and each item acts as the key for another http-call.
What's the best way to do that?
I guess calling $http from within another $http call does not work as the outer $http-loop may just exit before the inner $http calls
have finished?
// pseudo code:
$http.get(url).then((response) ->
foreach response.data as item
$http.get(item.url).then((response) ->
foreach response.data as item
)
return result
)
Doesn't really work, right?
A:
This works fine, but you need to collect the sub-responses in an array or object and return $q.all(sub-responses) to ensure the caller waits for them to resolve.
Sticking to your pseudo-code I think it looks like:
$http.get(url).then((response) ->
var r = [];
foreach response.data as item
r.push($http.get(item.url))
return $q.all(r).then((subs) -> foreach ...)
)
Q:
Cassandra read performance
I want to know how to configure Cassandra to get better READ performance, because when I try to do a SELECT query on a table which has 1M rows I get the timedoutexception.
I've already changed request_timeout_in_ms and added more nodes, but I still get the same error.
A:
You are querying too many rows at once. You need to query fewer rows at a time and page through them.
Update:
First query:
select <KEY>,p0001 from eExtension limit 1000;
Repeat:
take the last result from that query:
select <KEY>,p0001 from eExtension where token(<KEY>) > token(<LAST KEY RETURNED FROM PREVIOUS>) limit 1000;
repeat that pattern until done.
Q:
How to allow variables to be used in a csh script called from another csh script?
I have a script caller.csh in which I call another one called.sh. I declare some variables using set command in caller.csh (e.g. set alpha = 10). How do I use them in called.sh (e.g. echo $alpha) without passing them as command line parameters?
Note that the script called.sh can be run standalone and the variables are defined in it also. I would like these variable values to be overridden by those in caller.csh. So I'll use something like if not defined $alpha, then set alpha = in called.sh.
A:
One way is to set them as environment variables (using setenv).
However, this is not a good approach. The interface for passing variables to a shell script should be via the command-line options/arguments. If it's a shell script, it should be standalone-ish. How else would you pass the same arguments when running it directly from command line?
EDIT: following the update to the second part of your question. Use getopt. You can set the variables your default values, and then use getopt to override those values in case the user passes them as command-line options/arguments. This approach is preferable to any kind of globally-defined variables.
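A minimal sketch of that "default, then override" pattern in POSIX sh (using a plain while/case loop rather than getopt for brevity; the -a option name is just illustrative):

```shell
#!/bin/sh
# called.sh (sketch): a default value that command-line options can override

alpha=10                 # default, used when the caller passes nothing

set -- -a 42             # simulate being invoked as: ./called.sh -a 42
while [ $# -gt 0 ]; do
  case "$1" in
    -a) alpha="$2"; shift 2 ;;
    *)  shift ;;
  esac
done

echo "alpha=$alpha"      # prints: alpha=42
```

caller.csh can then simply pass the value on the command line instead of relying on environment variables.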
Q:
Trouble in linking the libIEC61850 library with GTK3+ for C
I'm having trouble combining compilations and linkages for two separate libraries. Following is a makefile for one of the libraries (libIEC61850):
LIBIEC_HOME=../../iec61850/libiec61850-1.4.0/
#Add this somehow:
#cc `pkg-config --cflags gtk+-3.0` main.c -o a.out `pkg-config --libs gtk+-3.0`
PROJECT_BINARY_NAME = a.out
PROJECT_SOURCES += main.c
INCLUDES += -I.
include $(LIBIEC_HOME)/make/target_system.mk
include $(LIBIEC_HOME)/make/stack_includes.mk
all: $(PROJECT_BINARY_NAME)
include $(LIBIEC_HOME)/make/common_targets.mk
$(PROJECT_BINARY_NAME): $(PROJECT_SOURCES) $(LIB_NAME)
$(CC) $(CFLAGS) $(LDFLAGS) -o $(PROJECT_BINARY_NAME) $(PROJECT_SOURCES) $(INCLUDES) $(LIB_NAME) $(LDLIBS)
clean:
rm -f $(PROJECT_BINARY_NAME)
I would like to add the gtk-3.0 library in the compilation of main.c. To execute a typical gtk3 function, I simply write:
$ cc `pkg-config --cflags gtk+-3.0` main.c -o a.out `pkg-config --libs gtk+-3.0`
and the executable is generated without problem. How can I combine these two?
A:
Fixed. Following is a Makefile to combine the libIEC61850 and GTK+ 3.0 libraries:
# Makefile to combine the libIEC61850 and GTK+-3.0 Libraries
# path to libIEC61850:
LIBIEC_HOME=../../iec61850/libiec61850-1.4.0/
#Add this somehow:
#cc `pkg-config --cflags gtk+-3.0` main.c -o a.out `pkg-config --libs gtk+-3.0`
PROJECT_BINARY_NAME = a.out
PROJECT_SOURCES += main.c
include $(LIBIEC_HOME)/make/target_system.mk
include $(LIBIEC_HOME)/make/stack_includes.mk
all: $(PROJECT_BINARY_NAME)
include $(LIBIEC_HOME)/make/common_targets.mk
$(PROJECT_BINARY_NAME): $(PROJECT_SOURCES) $(LIB_NAME)
$(CC) $(CFLAGS) $(shell pkg-config --cflags gtk+-3.0) $(LDFLAGS) \
-o $(PROJECT_BINARY_NAME) $(PROJECT_SOURCES) $(INCLUDES) \
$(LIB_NAME) $(LDLIBS) $(shell pkg-config --libs gtk+-3.0)
clean:
rm -f $(PROJECT_BINARY_NAME)
Q:
Javascript loads wrong on specific page
This will be a newbie question, but i hope that there is a gentle soul which knows how to handle the following problem.
I got two almost identical pages - except for very small changes on each page. Nothing I can see that should affect how the page is handled. On one page, everything loads as it should. On the other page, 1 out of 4 times, the js does not load at all.
The pages are in Wordpress.
I have tried everything I know. The default.js files are identical for the pages, and the function.php files are identical too. I simply do not know how to figure out what is wrong or how to debug it. As the code is identical, I see no reason to post it - it works on one page but not the other. So to my knowledge something else must be wrong, but I have no idea what.
The two pages are http://dev.ateo.dk/kollekolle/ and http://copy.ateo.dk/kollekolle/ ; copy.ateo.dk is the one that is not working. The js is loaded if the tabs work. Try a refresh to encounter the problem.
Cheers.
A:
Problems like the one you describe are often hard to track down, and can be specific to certain browsers, or even specific versions of browsers. It's why websites like www.caniuse.com are so useful.
Specifically about your problem: I don't think the following 2 errors help.
Uncaught ReferenceError: $ is not defined dev.ateo.dk/wp-content/themes/ateo/js/default.js:5
Uncaught ReferenceError: jQuery is not defined dev.ateo.dk/wp-content/themes/ateo/js/jquery.cookie.js:17
It appears your jQuery isn't loaded in properly in some (or all) cases. When you fix that your problem might also go away.
P.S. This error comes from the Developer Console on the latest version of Chrome.
EDIT: It seems that jQuery is not loaded in yet by the time your default.js starts to execute. Try to find where both jQuery and your default.js are loaded in, and see if you can make sure that it happens in the right order (for example, placing a <script src="..."></script> tag first for jQuery and directly below that one for your default.js should make sure your default.js cannot be loaded before jQuery).
An extra issue could be if your default.js starts execution before the DOM is finished loading. In that case things like event handlers (for your tabs for example) might not be attached as expected (since the nodes to attach them to don't exist yet at the time of execution). This can be subtle, as it won't show any errors to indicate this is happening. Make sure you wait for a $().load() or equivalent before you start manipulating things script-wise.
Q:
Skip lines in std::istream
I'm using std::getline() to read lines from an std::istream-derived class. How can I move forward a few lines?
Do I have to just read and discard them?
A:
No, you don't have to use getline
The more efficient way is to skip lines with std::istream::ignore
for (int currLineNumber = 0; currLineNumber < startLineNumber; ++currLineNumber){
if (addressesFile.ignore(numeric_limits<streamsize>::max(), addressesFile.widen('\n'))){
//just skipping the line
} else
return HandleReadingLineError(addressesFile, currLineNumber);
}
HandleReadingLineError is not standard but hand-made, of course.
The first parameter is the maximum number of characters to extract. If this is exactly std::numeric_limits<streamsize>::max(), there is no limit:
Link at cplusplus.com: std::istream::ignore
If you are going to skip a lot of lines you definitely should use it instead of getline: when I needed to skip 100,000 lines in my file it took about a second, as opposed to 22 seconds with getline.
A:
Edit: You can also use std::istream::ignore, see https://stackoverflow.com/a/25012566/492336
Do I have to use getline the number of lines I want to skip?
No, but it's probably going to be the clearest solution to those reading your code. If the number of lines you're skipping is large, you can improve performance by reading large blocks and counting newlines in each block, stopping and repositioning the file to the last newline's location. But unless you are having performance problems, I'd just put getline in a loop for the number of lines you want to skip.
A:
Yes, use std::getline unless you know the locations of the newlines.
If for some strange reason you happen to know where the newlines appear, then you can use ifstream::seekg first.
You can read in other ways, such as ifstream::read, but std::getline is probably the easiest and clearest solution.
Q:
Undoing latest push to remote branch
I accidentally committed and pushed my code changes to the wrong branch.
Here is what I have done to undo my bad changes
git log : find out where I need to go back to
git reset --hard 3cd4e57dcbb2a5bae350086c11d64c2f01ad4546
git push -f origin 3cd4e57dcbb2a5bae350086c11d64c2f01ad4546:develop
but I get an error
! [remote rejected] 3cd4e57dcbb2a5bae350086c11d64c2f01ad4546 -> develop (protected branch hook declined)
How do I undo on the remote as well? I guess git reset --hard is only local not remote.
A:
It seems the remote branch is protected.
To make the force-push work,
you can temporarily unprotect the branch, push, and protect again.
You can do this on the Settings / Branches tab of your repository's page on GitHub.
Note that this (and force-pushing in general) is not a recommended operation on public repositories.
Alternatively,
you can undo the commit by reverting it,
which will generate a new commit after the bad one,
and then simply push.
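A runnable sketch of the revert approach in a throwaway repository (the paths and commit messages are made up; in the real scenario you would follow the revert with `git push origin develop`):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo "good" > file.txt
git add file.txt && git commit -qm "good change"
echo "bad" >> file.txt
git commit -qam "bad change"

# Generate a new commit that undoes the bad one -- no force-push needed
git revert --no-edit HEAD
cat file.txt   # back to "good"
```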
Q:
Can Hadoop be restricted to spare CPU cycles?
Is it possible to run Hadoop so that it only uses spare CPU cycles? I.e. would it be feasible to install Hadoop on people's work machines so that number crunching can be done when they are not using their PCs, and they wouldn't experience an obvious performance drain (whirring fans aside!).
Perhaps it's just be a case of setting the JVM to run at a low priority and not use 'too much' network (assuming such a thing is possible on a windows machine)?
If not, does anyone know of any Java equivalents to things like BOINC?
Edit: Found a list of Cycle Scavenging Infrastructure here. Although my question about Hadoop still stands.
A:
This is very much outside the intended usage for Hadoop. Hadoop expects all of its nodes to be fully available and networked for optimal throughput -- not something you get with workstations. Furthermore, it doesn't even really run in Windows (you can use it with cygwin, but I don't know anyone using that for "production" -- except as client machines issuing jobs).
Hadoop does things like store data chunks on a few of the nodes, and try to schedule all computation on that data on those nodes; in a work-sharing environment, that means a task that needs this data will want to run on those three workstations -- regardless of what their users are doing at the moment. In contrast, "cycle scavenging" projects keep all the data elsewhere, and ship it and a task to any node that's available at a given moment; this enables them to be nicer to the machines, but it incurs obvious data transfer costs.
Q:
How to pass a method's nested procedure as a parameter?
Given a TForm with a TListBox on it, the following works:
procedure TForm1.FormCreate(Sender: TObject);
procedure _WorkOnListBox;
begin
ListBox.Items.Append('Test');
end;
begin
_WorkOnListBox;
end;
As does the following:
procedure TForm1.DoWithoutListBoxEvents(AProc: TProc);
begin
ListBox.Items.BeginUpdate;
try
AProc;
finally
ListBox.Items.EndUpdate;
end;
end;
procedure TForm1.FormCreate(Sender: TObject);
begin
DoWithoutListBoxEvents(procedure
begin
LayersListBox.Items.Append('Test');
end);
end;
But the following does not:
procedure TForm1.FormCreate(Sender: TObject);
procedure _WorkOnListBox;
begin
ListBox.Items.Append('Test');
end;
begin
DoWithoutListBoxEvents(_WorkOnListBox);
end;
I get an E2555 Cannot capture symbol '_WorkOnListBox'. Why? Is there any way to get the DoWithoutListBoxEvents to work without using an anonymous procedure? Although I think it looks elegant with it, I'm trying to stay FPC compatible.
A:
DoWithoutListBoxEvents() takes a TProc as input:
type
TProc = procedure;
Only a standalone non-class procedure and an anonymous procedure can be assigned to a TProc. _WorkOnListBox is neither of those; it is a local (nested) procedure instead. A local procedure has special compiler handling that ties it to its parent's stack frame. Thus, _WorkOnListBox is not compatible with TProc.
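One FPC-compatible workaround (a sketch, untested) is to hoist the worker out of the method and pass the form through a parameter instead of relying on the captured stack frame; the callback type then takes the form explicitly:

```pascal
type
  TFormProc = procedure(AForm: TForm1);

procedure WorkOnListBox(AForm: TForm1);
begin
  AForm.ListBox.Items.Append('Test');
end;

procedure TForm1.DoWithoutListBoxEvents(AProc: TFormProc);
begin
  ListBox.Items.BeginUpdate;
  try
    AProc(Self);
  finally
    ListBox.Items.EndUpdate;
  end;
end;

procedure TForm1.FormCreate(Sender: TObject);
begin
  { In FPC's objfpc mode the @ operator is required here;
    plain Delphi also accepts the bare procedure name. }
  DoWithoutListBoxEvents(@WorkOnListBox);
end;
```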
Q:
Load HTML - Command line
I have a PHP script that dynamically creates a HTML file. In command line, I would like to load all elements in the HTML file.
So let's say the HTML file has these elements:
img src="http://www.test.com/image.php" ...
iframe name="xxx" src="https://www.abc.com" ...
I would like the Web servers test.com and abc.com to actually receive my request.
Is there a way to do that in command line?
What I tried so far is to make my HTML accessible via my local Web server and fetch the file with "wget --mirror", but no success.
Thank you for your help.
A:
wget --mirror is definitely the way to go.
To make sure it loads the external references, add --page-requisites:
This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets.
Q:
Creating sparse matrix from a data frame in R
Given a dataframe that has multiple objects mapped to the same person:
mike toy
mike golf
mike swim
mike call
tom eat
tom sleep
nate eat
how can i generate a matrix that has the counts of each of the elements and NA in the rest?:
toy golf swim call eat sleep
mike 1 1 1 1 NA NA
tom NA NA NA NA 1 1
nate NA NA NA NA 1 NA
A:
> df
V1 V2
1 mike toy
2 mike golf
3 mike swim
4 mike call
5 tom eat
6 tom sleep
7 nate eat
> table(df)
V2
V1 call eat golf sleep swim toy
mike 1 0 1 0 1 1
nate 0 1 0 0 0 0
tom 0 1 0 1 0 0
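If the zeros really need to be NA, as in the desired output, one extra step converts them (a sketch, untested here):

```r
tab <- table(df)      # counts, with 0 where a name/object pair never occurs
tab[tab == 0] <- NA   # show NA instead of 0, as in the desired matrix
tab
```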
Q:
ggplot apply different scaling to parts of axis for continuous variable
I'm trying to make an age distribution comparison between groups in ggplot in R and what I'd like to do is emphasize specific age groups by making that part of the axis wider:
Here, I'd like to expand the 0, 6, 12, 24, 36 range to be wider, but reduce the length of the 60, 120 etc tail so it's not as long.
Code I'm using:
require(ggplot2)
p <- ggplot(age.df, aes(x=group, y=Age.cont, fill=group)) +
geom_violin(trim=TRUE) + geom_boxplot(width=0.05, colour = "#FFFFFF",
outlier.shape = NA) +
scale_fill_manual(values=rev(c("#83141F", "#1F427D", "#036A3C"))) +
scale_colour_manual(values="#FFFFFF") + theme_classic() +
xlab("") + ylab("Age (months)") + scale_y_continuous(breaks = c(0, 6, 12, 24,
36, 48, 60, 120))
p + coord_flip()
Any help is appreciated
A:
You could use a square root transform. You don't provide data, so I created my own data frame for this example.
# Dummy data frame
df <- data.frame(values = runif(100, 0, 120),
groups = c(rep("A", 50), rep("B", 50)))
# Create violin plot with square root transform
ggplot(df, aes(x = groups, y = values)) +
geom_violin() +
coord_flip() +
scale_y_sqrt(breaks = c(0, 6, 12, 24, 36, 48, 60, 120))
Q:
How to integrate Payseal in ASPNET
Can anyone please provide the details of integrating Payseal (ICICI) in my website? They have given some testing code, but I couldn't understand it. A code sample would be very useful to me.
Thanks in Advance
A:
OK, I found the solution. Just add the DLL which is provided by them. They have given sample test pages. Among those, run the aspx file "testssl.aspx" after changing the setMerchantDetails call with your information (merchant id, response page, etc.), like:
objMerchant.setMerchantDetails("00001212", "00001212", "00001212", "", transactionid, "Orderno", "http://localhost/SFAClient/SFAResponse.aspx", "POST", "INR", "INVoiceno", "req.Preauthorization", "1550.00", "GMT+05:30", "Ext1", "Ext2", "Ext3", "Ext4", "Ext5");
That will take you to the ICICI Payseal page, where the customer has to respond. After finishing the payment, it will return directly to the response page designed by you (http://localhost/SFAClient/SFAResponse.aspx).
Hope it helps others!
Q:
How to use getCurrentSession in Hibernate?
Configuration hibernate
Properties prop= new Properties();
prop.setProperty("hibernate.connection.driver_class", "com.mysql.jdbc.Driver");
prop.setProperty("hibernate.connection.url", url);
prop.setProperty("hibernate.connection.username", user);
prop.setProperty("hibernate.connection.password", password);
concreteSessionFactory = new AnnotationConfiguration()
.addPackage("Main.*")
.addProperties(prop)
.addAnnotatedClass(DeviceDataSet.class)
.buildSessionFactory();
public SessionFactory sessionFactory() {
return concreteSessionFactory;
}
I use
public long insertNewJournalNote(JournalDataSet journalDataSet) {
Session session = sessionFactory.openSession();
MySqlDAO dao = new MySqlDAO(session);
return dao.insertNewJournalNote(journalDataSet);
}
With getCurrentSession, I get:
Exception in thread "main" org.hibernate.HibernateException: No CurrentSessionContext configured
A:
Maybe this line could solve your problem:
prop.setProperty("hibernate.current_session_context_class", "thread");
Q:
Is it necessary to use super().__init__() in this case?
EDIT:
Oh, sorry guys. This question is a duplicate, but not of the linked one. I've found what I need in this question; maybe I should use more keywords to search next time: Subclassing dict: should dict.__init__() be called?
In my case, I implement update and __setitem__ in the class StrKeyDict(dict), and __new__, inherited from dict, may create an empty dict to ensure update can work, so I don't think it's necessary to use super().__init__() again.
The code is from Fluent Python
example-code/attic/dicts/strkeydict_dictsub.py
import collections.abc
class StrKeyDict(dict):
def __init__(self, iterable=None, **kwds):
super().__init__()
self.update(iterable, **kwds)
def __missing__(self, key):
if isinstance(key, str):
raise KeyError(key)
return self[str(key)]
def __contains__(self, key):
return key in self.keys() or str(key) in self.keys()
def __setitem__(self, key, item):
super().__setitem__(str(key), item)
def get(self, key, default=None):
try:
return self[key]
except KeyError:
return default
def update(self, iterable=None, **kwds):
if iterable is not None:
if isinstance(iterable, collections.abc.Mapping):
pairs = iterable.items()
else:
pairs = ((k, v) for k, v in iterable)
for key, value in pairs:
self[key] = value
if kwds:
self.update(kwds)
When we use
d = StrKeyDict(a=1,b=2)
for example, to create instance d, the real happening is:
1. Call __new__, which is inherited from the superclass dict, to create an empty dict instance
2. Call __init__ to initialize the instance
Just like I said, I implement update and __setitem__ in the class StrKeyDict(dict). So is it necessary to use super().__init__() here?
Thank you!
A:
The superclass' __init__() might or might not be doing something necessary to initialise it. If you know what's the case you can make an informed decision. If you don't know your best bet is to call it just in case.
Now if you don't call it because it is not necessary and the implementation of the base class changes in a way that make it necessary then you will have to fix it.
On the other hand there are a few cases where calling the superclass __init__() is a bad idea, for example if it does heavy calculations very specific to the superclass and that are different in the subclass.
FWIW my approach is to always call super().__init__() unless I have a good reason not to.
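A runnable sketch of the point under discussion: because dict.__new__ already builds the empty mapping, a subclass in the StrKeyDict style works even with super().__init__() omitted (this illustrates why the call is redundant here, not a recommendation to skip it in general):

```python
class StrKeyDict(dict):
    def __init__(self, iterable=None, **kwds):
        # deliberately no super().__init__() -- dict.__new__ made the storage
        self.update(iterable, **kwds)

    def __setitem__(self, key, item):
        super().__setitem__(str(key), item)

    def update(self, iterable=None, **kwds):
        if iterable is not None:
            pairs = iterable.items() if hasattr(iterable, "items") else iterable
            for key, value in pairs:
                self[key] = value
        if kwds:
            self.update(kwds)

d = StrKeyDict([(1, "one")], a=2)
print(d)  # {'1': 'one', 'a': 2}
```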
Q:
Why Aptana does not display classes information next to the python files in Navigator?
I'm using Aptana Studio 3 and Python 2.7.3. I have a project and I know that I can make Aptana show me information about the classes in each *.py file, but I do not know how to do it. ( i.e. there should be "plus" sign next to each file name and when I click it should expand and show me information about all the classes/methods/etc in the file )
Can someone tell me the steps?
P.P.: I've tried to google the information, but nothing came out : /
A:
Thanks @Sarah Kemp
To view the information in the files go to Window > Show View > Other > PyDev > PyDev Package Explorer
Q:
Running multiple functions in a foreach loop
I have a large Powershell script that checks multiple variables on VMs. The script consists of about 80 different functions that are named question1, question2, question3...
At first none of the functions needed parameters, so this code worked.
$number_of_questions = 1..75
foreach($num in $number_of_questions){
Invoke-Expression question$num
}
It iterates through every question.
But now I need to add parameters when I run the functions, and that doesn't work. I can't find a way to get it to work with arguments.
Here's a test version of what I'm trying to do.
function test1($text){
Write-host "Not argument"
Write-host $text
}
function test2($text){
Write-host "Not argument"
Write-host $text
}
function test3($text){
Write-host "Not argument"
Write-host $text
}
function test4($text){
Write-host "Not argument"
Write-host $text
}
function test5($text){
Write-host "Not argument"
Write-host $text
}
$num = 1..5
foreach($number in $num){
Invoke-Expression test$number -text "Argument"
}
Does anyone have a solution for running multiple functions with sequenced names that use parameters?
A:
Just replace:
Invoke-Expression test$number -text "Argument"
with:
Invoke-Expression "test$number -text `"Argument`""
to make it work.
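An alternative worth noting (a sketch, based on the call operator's documented behaviour): the & call operator invokes a command by name without re-parsing a string, so no nested quoting or escaping is needed:

```powershell
foreach ($number in 1..5) {
    # '&' treats "test$number" as a command name and passes -text normally
    & "test$number" -text "Argument"
}
```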
Q:
Check whether a particular geo location (latitude and longitude) belongs to 'New York' or not?
I want to find out whether a particular geo location belongs to 'New York, US' or not, to show different content based on the location. I just have the corresponding location's latitude and longitude details; does anybody know a solution to handle this scenario?
A:
Working demo
using javascript and jquery:- Working demo - just press 'run' at the top of the page.
Yahoo's GEO API
I did something similar to this a while back using Yahoo's GEO API. You can look up the locality of a specific latitude and longitude with the following YQL query:-
select locality1 from geo.places where text="40.714623,-74.006605"
You can see the XML that is returned in the YQL console here
To get this XML from your javascript/php code you can pass the query as a GET string like:-
http://query.yahooapis.com/v1/public/yql?q=[url encoded query here]
This will return just the XML which you can parse using jquery's parseXML() method
Example Jquery code
Here is some example javascript to do what you're after:-
// Lat and long for which we want to determine if in NY or not
var lat = '40.714623';
var long = '-74.006605';
// Get xml from yahoo api
$.get('http://query.yahooapis.com/v1/public/yql', {q: 'select locality1 from geo.places where text="' + lat + ',' + long + '"'}, function(data) {
// Jquery's get will automatically detect that it is XML and parse it
// so here we create a wrapped set of the xml using $() so we can use
// the usual jquery selecters to find what we want
$xml = $(data);
// Simply use jquery's find to find 'locality1' which contains the city name
$city = $xml.find("locality1").first();
// See if we're in new york
if ($city.text() == 'New York')
alert(lat + ',' + long + ' is in new york');
else
alert(lat + ',' + long + ' is NOT in new york');
});
Q:
FreeBSD rc.d script doesn't start as a daemon
I have developed the following script at location /usr/local/etc/rc.d/bluesky
#!/bin/sh
# PROVIDE: bluesky
# REQUIRE: mysql sshd
# BEFORE:
# KEYWORD:
. /etc/rc.subr
name="bluesky"
rcvar=bluesky_enable
start_cmd="${name}_start"
stop_cmd=":"
load_rc_config $name
: ${bluesky_enable:=no}
: ${bluesky_msg="HTTP server starts ..."}
bluesky_start(){
echo $PATH
export PATH=$PATH:/usr/local/bin/
echo $PATH
### Run Node server ###
/usr/local/bin/node /usr/home/ict/Documents/bluesky/server.js
echo "$bluesky_msg"
}
run_rc_command "$1"
I also have enabled it on my /etc/rc.conf file:
bluesky_enable="YES"
When I reboot the server, the script works fine and starts the HTTP server at port 80. The only problem is that the script isn't sent to the background or started as a daemon. I wonder how I can run the script at boot time in the background or as a daemon.
A:
The RC script itself is not intended to daemonize, but is expected to start and stop the daemon.
If your service does not have an option to start as a daemon, you can use daemon(8) to manage that part.
An example:
#!/bin/sh
# PROVIDE: ...
# REQUIRE: ...
. /etc/rc.subr
name="..."
rcvar=${name}_enable
pidfile="/var/run/${name}.pid"
command="/usr/sbin/daemon"
command_args="-c -f -P ${pidfile} -r /usr/local/libexec/${name}"
load_rc_config $name
run_rc_command "$1"
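Applied to the bluesky script from the question, that pattern might look like this (an untested sketch; the node and server.js paths are taken from the question):

```sh
#!/bin/sh

# PROVIDE: bluesky
# REQUIRE: mysql sshd

. /etc/rc.subr

name="bluesky"
rcvar=${name}_enable
pidfile="/var/run/${name}.pid"
command="/usr/sbin/daemon"
command_args="-c -f -P ${pidfile} -r /usr/local/bin/node /usr/home/ict/Documents/bluesky/server.js"

load_rc_config $name
: ${bluesky_enable:=no}
run_rc_command "$1"
```

Here daemon(8) handles backgrounding, pidfile management (-P), and restarting the node process if it exits (-r), so the rc script itself stays a thin wrapper.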
Q:
Minimization problem with infinitely many variables and linear constraints
How can the following minimization problem be solved?
$$\begin{array}{ll} \text{minimize} & \displaystyle\sum_{i=1}^{\infty}P_i^3\\ \text{subject to} & \displaystyle\sum_{i=1}^{\infty} P_i = 1\\ & P_i \geqslant 0 \quad\text{for} \quad i \in \mathbb N^+\end{array}$$
I guess the Lagrange multipliers and Karush–Kuhn–Tucker conditions won't work for infinitely many variables. Any hints on how to approach the problem will really be appreciated.
A:
Generally, a major part of such problems is establishing that
a solution exists.
The problem doesn't have a solution in the usual sense.
However, $\inf \{ \sum_k p_k^3 | \sum_k p_k =1, p_k \ge 0 \} = 0$.
To see this, note that the cost is always non-negative, and taking $p_1=\cdots = p_n = {1 \over n}$ and $p_k = 0$ for $k >n$, we have
$\sum_k p_k^3 = {1 \over n^2}$. Hence the $\inf$ is zero.
Q:
Taking Web screenshot using imgkit
I was trying to take screenshots using imgkit as follows,
options = {
'width': 1000,
'height': 1000
}
imgkit.from_url('https://www.ozbargain.com.au/', 'out1.jpg', options=options)
What I am getting is
The actual look is a bit different. Possibly this is due to JavaScript not being executed (it's a guess). Could you please tell me how I can do this with imgkit? Any suggested library would be helpful too.
A:
You could use Selenium to control a web browser (Chrome or Firefox), which can run JavaScript, and the browser has a function to take a screenshot. But the page may display windows with messages which you may have to close using click() in code; you would have to find manually (in DevTools in the browser) the class name, id, or other values that help Selenium recognize the button on the page.
from selenium import webdriver
from time import sleep
#driver = webdriver.Firefox()
driver = webdriver.Chrome()
driver.get('https://www.ozbargain.com.au/')
driver.set_window_size(1000, 1000)
sleep(2)
# close first message
driver.find_element_by_class_name('qc-cmp-button').click()
sleep(1)
# close second message with details
driver.find_element_by_class_name('qc-cmp-button.qc-cmp-save-and-exit').click()
sleep(1)
driver.get_screenshot_as_file("screenshot.png")
#driver.quit()
Alternatively, you could use PyAutoGUI or mss to take a screenshot of the full desktop or some region of the desktop.
Q:
Excel Message During Code(C#)
When my code is running, a message box pops up from Excel that has a Retry button. I just want my code to be able to simulate hitting the Enter key when it pops up, which it does every time. This is the message box that comes up and freezes my code.
I turned one of them off using the OLE setting below, but I still get this message box. Is there any way to simulate a click on Retry, or an Enter key press?
//Turn off OLE Error Message
oXL.DisplayAlerts = false;
A:
InputSimulator is a very flexible (and reliable) wrapper that is capable of simulating keyboard and mouse events.
It wraps SendInput under the hood but abstracts away all the PInvoke calls and other complexity. It's a drop in DLL that (for your situation) should only take one line of code.
InputSimulator.SimulateKeyPress(VirtualKeyCode.ENTER);
Q:
jQuery cannot capture href in Bonitasoft
I have an issue regarding jQuery in Bonitasoft. I call href in jQuery code but it is undefined when it runs in the browser. However, if I call href in the console, it displays the download link.
Here is the code.
<script type="text/javascript">
$(document).ready(function()
{
var attachLeave = function()
{
console.log('kucingku comel')
var leave1 = $('#leaveType1').find('select').val();
var btn = $('#Submit1').find('button');
console.log (leave1);
if(leave1 == "Medical Leave"){
btn.attr('disabled', 'disabled');
btn.attr('style', 'color:black');
}
else{
btn.removeAttr('disabled');
btn.attr('style', 'color:white');
}
}
var attachmentUpload = function()
{
console.log('enter attachment validation');
var UploadEx = $('#File1').find('.bonita_file_upload').val();
var uploadFile = $('#File1').find('.bonita_download_link');
var btn = $('#Submit1').find('button');
uploadFile.attr('href');
console.log('download-link ' + uploadFile.attr('href'));
if(uploadFile.attr('href') != "null")
{
console.log('Ready to download');
btn.removeAttr('disabled');
btn.attr('style', 'color:white');
}
else
{
console.log("Download is blank");
btn.attr('disabled', 'disabled');
btn.attr('style', 'color:black');
}
}
console.log('ayam goreng');
//$('#Submit1').find('button').attr('disabled', 'disabled');
var leave1 = $('#leaveType1').find('select');
leave1.change(attachLeave);
//attach file or URL to attachment
var uploadExcuse = $('#File1').find('.bonita_file_upload');
uploadExcuse.on('change', attachmentUpload);
});
</script>
Note that I have called the href in the jQuery code, but it is still undefined.
I hope anyone here can help me. Thanks
A:
Do it something like this:
attachmentUpload = function(e) {
e.preventDefault();
// Holds all the file related data
var Files = e.target.files[0];
console.log( Files );
// Do what you want
}
Description:
e.target.files returns an array of file data (file type, file name, etc.). By default a file input only allows a single file upload, which means e.target.files holds a single file.
But if you add the multiple attribute to the file input element, then on the change event e.target.files returns the data of all selected files in an array, like
// Depends on your selected files
// And better way is use for loop for multi file upload
e.target.files[0],
e.target.files[1],
.............
Q:
Were there (or are there) any Dark Wizards who were Sorted into the Hufflepuff House?
I just finished watching Harry Potter and the Sorcerer's (Philosopher's) Stone on DVD. Not my first time seeing the movie by a long shot, but Quirrell captured my attention during the film. He was a servant of Lord Voldemort. I have heard on Pottermore that Quirinus Quirrell was a Ravenclaw, and that Hufflepuff has had the fewest Dark wizards, so I started thinking of some famous Dark Witches and Wizards each of the four Houses may have.
Slytherin had Lord Voldemort (most famously, along with many Death Eaters)
Gryffindor had Wormtail; Ravenclaw had Quirrell.
I have tried to find some Dark Witches or Wizards from Hufflepuff, but in vain.
Are there any current Dark Wizards (most likely in Azkaban) or former Dark Wizards who are known to have been a part of Hufflepuff House?
A:
The Hufflepuff welcome letter on Pottermore makes a claim about their Dark wizard turnout:
However, it’s true that Hufflepuff is a bit lacking in one area. We’ve produced the fewest Dark wizards of any house in this school. Of course, you’d expect Slytherin to churn out evil-doers, seeing as they’ve never heard of fair play and prefer cheating over hard work any day, but even Gryffindor (the house we get on best with) has produced a few dodgy characters.
Of course, take that with a pinch of salt, since it comes from a Hufflepuff prefect.
That it says “the fewest” might suggest that they have turned out a non-zero number of Dark wizards/witches in their history, but less than the other houses (so the answer to the original question would be yes). But I don’t know of any canon examples that back up that interpretation.
It’s also possible that this Prefect doesn’t know of any instances of Dark wizards in Hufflepuff, but doesn’t want to say “no Dark wizards” in case there are some that they don’t know about.
Comments in the Slytherin welcome letter also hint at Dark wizards across all three houses, but again fails to cite specific instances:
I’m not denying that we’ve produced our share of Dark wizards, but so have the other three houses – they just don’t like admitting it.
But again, that comes with the bias of being written by a Slytherin prefect.
Overall, I’d say it’s pretty likely that Hufflepuff have turned out some Dark wizards and witches, but they probably pale in comparison to the number and notoriety of those turned out by Slytherin, hence they don’t tend to get mentioned much.
A:
Harry Potter Wiki suggests that Hufflepuff has the fewest dark wizards of any house, but I have found several accounts that suggest that they have none.
So either they have none, or they have a few little-known ones, as at least one Slytherin claims.
A:
In an alternate timeline, Cedric Diggory, a prominent Hufflepuff alumnus, suffers a deep humiliation during the Triwizard Tournament and ultimately becomes a dark wizard and a Death Eater.
SCORPIUS: He wasn’t supposed to do it alone. Cedric was supposed to win it with him. But we humiliated him out of the tournament. And
as a result of that humiliation he became a Death Eater. I can’t work
out what he did in the Battle of Hogwarts — whether he killed someone
or — but he did something and it changed everything.
SNAPE: Cedric Diggory killed only one wizard and not a significant one — Neville Longbottom.
SCORPIUS: Oh, of course, that’s it! Professor Longbottom was supposed to kill Nagini, Voldemort’s snake. Nagini had to die before
Voldemort could die. That’s it! You’ve solved it! We destroyed Cedric,
he killed Neville, Voldemort won the battle. Can you see? Can you see
it?
Harry Potter and the Cursed Child - ACT THREE, SCENE FIVE
Q:
Excel VBA: Using variable in place of item in quotation marks
I am trying to copy the contents of a cell in a different workbook, but would like the user to be able to specify which cell to start in by typing the name of the cell into a cell in the current workbook (that the macro code is in).
(By the way, please excuse the elementary nature of this question as well as any obvious mistakes I am making vocabulary-wise; I am new at trying my hand at this!)
I have come up with the following code, but am getting a "Run-time error '91': Object variable or With block variable not set" message.
(Please note that I have also used user input to refer to the different workbook. That part worked.)
Sub OpenWorkBook()
Dim Src As Workbook
Set Src = Workbooks.Open(Range("B3"))
Dim StrtCell As String
StrtCell = Range("B4")
Src.Sheets("Sheet1").Range(StrtCell).Copy
ThisWorkbook.Activate
Range("A6").PasteSpecial
End Sub
Any assistance is greatly appreciated!
A:
It's best to always qualify your Range() calls with an explicit worksheet object, otherwise they will use whatever happens to be the ActiveSheet at the time.
Relying on some specific sheet being active when a particular line runs makes your code brittle and difficult to debug.
Sub OpenWorkBook()
Dim Src As Workbook
Dim StrtCell As String
Dim sht as Worksheet
Set sht = Activesheet
Set Src = Workbooks.Open(sht.Range("B3"))
StrtCell = sht.Range("B4")
Src.Sheets("Sheet1").Range(StrtCell).Copy
ThisWorkbook.Activate
sht.Range("A6").PasteSpecial
End Sub
Q:
Looping through files in order with Dir()
I am attempting to insert several pictures into an Excel spreadsheet, and then save it as a PDF. I have been able to figure out how to space the pictures and iterate through all the pictures in a folder, but I can't seem to figure out how to iterate through the pictures in order.
I have found that I can iterate through the .jpg files in a specific folder using Dir as seen in this question: Loop through files in a folder using VBA? and this question macro - open all files in a folder. It has worked wonders, but I need to iterate through the pictures in order. The pictures are labeled "PHOTOMICS0" with that final number increasing.
Here is what I am working with.
counter = 1
MyFile = Dir(MyFolder & "\*.jpg")
Do While MyFile <> vbNullString
incr = 43 * counter
Cells(incr, 1).Activate
ws1.Pictures.Insert(MyFolder & "\" & MyFile).Select
MyFile = Dir
counter = counter + 1
Loop
So far, MyFile has gone from "PHOTOMICS0" to "PHOTOMICS4", 9, 10, 7, 2, 3, 8, 6, 5, and finally 1. When repeated it follows the same order. How can I increment through these in numerical order?
A:
Thanks to the advice of cybernetic.nomad and Siddharth Rout I was able to fix this.
I used some functions and lines of codes from these posts:
How to find numbers from a string?
How to sort an array of strings containing numbers
Here is the functioning code:
counter = 0
MyFile = Dir(MyFolder & "\*.jpg")
Do While MyFile <> vbNullString
ReDim Preserve PMArray(counter)
PMArray(counter) = MyFile
MyFile = Dir
counter = counter + 1
Loop
Call BubbleSort(PMArray)
b = counter - 1
For j = 0 To b
a = j + 1
If j > 24 Then a = j + 2
incr = 43 * a
Cells(incr, 1).Activate
ws1.Pictures.Insert(MyFolder & "\" & PMArray(j)).Select
Next j
Where BubbleSort and the associated function used in BubbleSort are:
Sub BubbleSort(arr)
Dim strTemp As String
Dim i As Long
Dim j As Long
Dim lngMin As Long
Dim lngMax As Long
lngMin = LBound(arr)
lngMax = UBound(arr)
For i = lngMin To lngMax - 1
For j = i + 1 To lngMax
If onlyDigits(arr(i)) > onlyDigits(arr(j)) Then
strTemp = arr(i)
arr(i) = arr(j)
arr(j) = strTemp
End If
Next j
Next i
End Sub
Function onlyDigits(s) As Integer
' Variables needed (remember to use "option explicit"). '
Dim retval As String ' This is the return string. '
Dim retvalint As Integer
Dim i As Integer ' Counter for character position. '
' Initialise return string to empty '
retval = ""
' For every character in input string, copy digits to '
' return string. '
For i = 1 To Len(s)
If Mid(s, i, 1) >= "0" And Mid(s, i, 1) <= "9" Then
retval = retval + Mid(s, i, 1)
End If
Next
' Then return the return string. '
retvalint = CInt(retval)
onlyDigits = retvalint
End Function
Q:
How many ways can a word be formed from $8$ A's and $5$ B's if every A is next to another A and every B is next to another B?
How many ways can a word be formed from $8$ A's and $5$ B's if every A is next to another A and every B is next to another B?
Note: It doesn't have to be an actual legal word. At least I think so.
I can't do it with permutations or combinations, and I don't think listing these all out is a very good idea. Thanks in advance for posting a solution!
A:
Each $B$ occurs at some position in the word. Label these from left to right the first $B$, the second $B$, and so forth.
Now consider how every $B$ can be next to another $B$ when there are exactly five $B$s.
The first $B$ and second $B$ must be adjacent.
The fourth $B$ and fifth $B$ must be adjacent.
The third $B$ can be adjacent to the second $B$, or to the fourth $B$, or both.
So far this provides two patterns: $A^k BBB A^m BB A^n$ and
$A^k BB A^m BBB A^n$, where $A^r$ is $r$ repetitions of $A$ and
$k + m + n = 8$.
But since every $A$ must be adjacent to another, that rules out all
sequences in which $k = 1$ or $m = 1$ or $n = 1$.
This suggests the following counting:
Count all ways to distribute $8$ indistinguishable objects (the $A$s) among three distinguishable boxes (the spaces occupied by the sequences $A^k$, $A^m$, and $A^n$) if there must be at least two objects per box. Multiply this number by $2$ to count the variations $A^k BBB A^m BB A^n$ and
$A^k BB A^m BBB A^n$.
Count all the ways to distribute $8$ indistinguishable objects among two distinguishable boxes if there must be at least two objects per box. This corresponds to setting exactly one of $k, m, n$ to zero. Multiply this number by $5$ to count the variations $A^k BBB A^m BB A^n$ and
$A^k BB A^m BBB A^n$ where $k=0$ or $n=0$ and to count the single variation where $m=0$.
Count the four sequences in which all the $A$s are in one "box": $AAAAAAAABBBBB$, $BBAAAAAAAABBB$, $BBBAAAAAAAABB$, and $BBBBBAAAAAAAA$.
The total number of words is the sum of those three counts.
A:
The $B's$ can either come as a block of $5$ or as a block of $3$ and a disjoint block of $2$. Let's count these cases separately.
Case I: Block of $5$. Then the number of $A's$ in front must be $\{0,2,3,4,5,6,8\}$ so $\fbox 7$.
Case II. block of $2$ plus block of $3$. Let's say the $2$-block comes first (the other case has the same count). Then we have three places to put $A's$ and we need at least $2$ in the middle.
IIa. $2$ in the middle. Then we have $\{0,2,3,4,6\}$ in front so $\fbox 5$.
IIb. $3$ in the middle. Then we have $\{0,2,3,5\}$ in front so $\fbox 4$.
IIc. $4$ in the middle. Then we have $\{0,2,4\}$ in front so $\fbox 3$.
IId. $5$ in the middle. Then we have $\{0,3\}$ in front so $\fbox 2$.
IIe. $6$ in the middle. Then we have $\{0,2\}$ in front so $\fbox 2$.
IIf. $8$ in the middle. Then $\fbox 1$.
Thus we have $5+4+3+2+2+1=17$. Double to get $\fbox {34}$
So I see $\fbox {41}$ altogether.
Note: this is very error prone. I would advise checking it with extreme skepticism.
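That check is easy to do by brute force: enumerate all $\binom{13}{5} = 1287$ placements of the $B$s and keep the words in which every letter sits next to an identical letter. A short Python sketch:

```python
from itertools import combinations

def valid(word):
    # every character must be adjacent to at least one identical character
    return all(
        (i > 0 and word[i - 1] == ch) or (i < len(word) - 1 and word[i + 1] == ch)
        for i, ch in enumerate(word)
    )

count = 0
for b_positions in combinations(range(13), 5):
    word = ["A"] * 13
    for p in b_positions:
        word[p] = "B"
    if valid(word):
        count += 1
print(count)  # 41
```

It prints $41$, agreeing with the $7 + 34 = 41$ count above.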
Q:
Change text of canvas without using canvas.getActiveObject() method
Without clicking on the canvas area, I want to change the text once it has been added to the canvas. In my code I have to select the canvas text first, and only then does it change when typing in the textarea.
here is my code:
//html
<canvas id="c" width="400" height="200"></canvas>
<br>
Change Text :
<textarea style="margin:0px;" id="textinput" rows="5" ></textarea>
//script
var canvas = new fabric.Canvas('c');
var text = new fabric.Text('Honey', {
fontSize: 100,
left: 50,
top: 50,
lineHeight: 1,
originX: 'left',
fontFamily: 'Helvetica',
fontWeight: 'bold'
});
canvas.add(text);
$("#textinput").keyup(function(event) {
//document.getElementById('textinput').addEventListener('keyup', function (e) {
// alert("hi");
var obj = canvas.getActiveObject();
if (!obj) return;
obj.setText(event.target.value);
canvas.renderAll();
});
//Css
canvas { border:1px solid #000; }
.controles { margin:50px 0; }
Any one have any idea how to do this ?
Here is my Fiddle Demo.
A:
This should do the trick:
canvas.getObjects()[0].text = "Boo Boo";
canvas.renderAll();
Here's a fiddle: http://jsfiddle.net/xadqg/
This of course will get the first object, but since you only have one this should be fine.
Feel free to change Boo Boo to whatever you want like $(this).val()
Q:
How to make a httprequest body in asp.net
I've always done my web services in PHP. Now I'm trying some ASP.NET on a project and I found myself on a tricky situation. I have the following C# code, behaving as a "client"
public void sendRequest(string URL, string JSON)
{
ASCIIEncoding Encode = new ASCIIEncoding();
byte[] data = Encode.GetBytes(JSON);
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(URL);
request.Method = "POST";
request.ContentLength = data.Length;
request.ContentType = "application/x-www-form-urlencoded";
request.CookieContainer = cookieContainer;
Stream dataStream = request.GetRequestStream();
dataStream.Write(data, 0, data.Length);
dataStream.Close();
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
WebHeaderCollection header = response.Headers;
var encoding = ASCIIEncoding.ASCII;
string responseText;
using (var reader = new System.IO.StreamReader(response.GetResponseStream(), encoding))
{
responseText = reader.ReadToEnd();
}
returnRequestTxtBx.Text = responseText;
}
Well, now I want to handle it on the ASPX.CS side...
my question is... how do I access the data I sent as a POST?
Is there a way in the "Page_Load" method that I can handle the JSON I sent?
A:
To read POST data on your server side, use the HttpContext.Request.Form collection:
protected void Page_Load(object sender, EventArgs e)
{
string value=Request.Form["keyName"];
}
Or, if you want to access the raw body data, simply read Request.InputStream.
And if you want to handle the JSON format, consider the Newtonsoft.Json package.
Q:
How to add a time value to a datetime value in Python?
I have a simple question (at least I thought so), I want to add a time value to a datetime value in Python. The values are read from an excel file.
I have the following code:
import xlrd
from datetime import time, datetime, timedelta
book = xlrd.open_workbook('C:\\Users\eline\Documents\***\***\Python\Example 1.xlsx')
sh = book.sheet_by_name("test")
arr_time = datetime(*xlrd.xldate_as_tuple(sh.cell_value(1,2), book.datemode))
print(arr_time)
a2 = sh.cell_value(1,5)
# converting float from excel to time value
print(int(a2*24*3600))
x = int(a2*24*3600)
slack_time = time(x//3600, (x%3600)//60, x%60)
print(slack_time)
new_arr_time = arr_time + slack_time
print(new_arr_time)
arr_time is here a datetime value which can vary e.g.:
2016-08-28 13:10:00
slack_time is here a time in minutes (sometimes hours) which can vary e.g.:
00:15:00
I would like to add the slack time (e.g. 15 minutes) to the arr_time. Thus for this example I would like to get the following output for new_arr_time:
2016-08-28 13:25:00
However, when running my code I get the following error: "TypeError: unsupported operand type(s) for +: 'datetime.datetime' and 'datetime.time'".
From this, I understand that I cannot add a time value to a datetime value, but when converting slack_time to a datetime value and then adding slack_time to arr_time I get a similar error (although subtracting works that way). I know I can use timedelta(minutes = 15) but since the values from the excel file vary and sometimes contain hours this does not work for me.
So my question is: how can I add a time value which is read from excel to a datetime value?
A:
You should turn the float value you're using to build slack_time into a time duration, i.e. a datetime.timedelta object. Then it can be added to the datetime object:
>>> x = 0.010416666666666666
>>> timedelta(days=x)
datetime.timedelta(0, 900)
>>> 900/60
15 # the fifteen minutes you had earlier
So your code becomes:
from datetime import datetime, timedelta
new_arr_time = arr_time + timedelta(days=float(sh.cell_value(1,5)))
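Putting it together, here is a minimal self-contained sketch, with the arrival time and the Excel day fraction hard-coded instead of read from the sheet:

```python
from datetime import datetime, timedelta

arr_time = datetime(2016, 8, 28, 13, 10)  # what xldate_as_tuple produced
slack = 0.010416666666666666              # Excel stores 00:15:00 as a fraction of a day

new_arr_time = arr_time + timedelta(days=slack)
print(new_arr_time)  # 2016-08-28 13:25:00
```

timedelta rounds to the nearest microsecond, so the float day fraction comes out as exactly 900 seconds here.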
Q:
Display Image and Text next to each other HTML
I'm trying to display an image and some text on my webpage floating next to each other as you can see below.
I've tried basically all the methods suggested in these two previous SO questions I found on this topic:
How to display items side-by-side without using tables?
HTML Code to put image in left and text in right side of screen with footer below?
However, no matter what combinations I try, this is the result that I obtain:
This is the HTML code for the first example (which seems not to work at all):
<div class="cf">
<img src="//upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Balzac.jpg/220px-Balzac.jpg" width=100>
<div>some text here</div>
</div>
This is the HTML code for the second example, which differs because the text is not wrapped in the <div> container (but seems to work only for a limited amount of text):
<div class="cf">
<img src="//upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Balzac.jpg/220px-Balzac.jpg" width=300>
some text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text heresome text here
</div>
The css file is from Nicholas Gallagher's micro clearfix:
/**
* For modern browsers
* 1. The space content is one way to avoid an Opera bug when the
* contenteditable attribute is included anywhere else in the document.
* Otherwise it causes space to appear at the top and bottom of elements
* that are clearfixed.
* 2. The use of `table` rather than `block` is only necessary if using
* `:before` to contain the top-margins of child elements.
*/
.cf:before,
.cf:after {
content: " "; /* 1 */
display: table; /* 2 */
}
.cf:after {
clear: both;
}
/**
* For IE 6/7 only
* Include this rule to trigger hasLayout and contain floats.
*/
.cf {
*zoom: 1;
}
Can you please tell me what is going wrong and how to fix this?
A:
Demo
css
img {
display: inline-block;
vertical-align: middle; /* or top or bottom */
}
.text {
display: inline-block;
vertical-align: middle; /* or top or bottom */
}
html
<div class="cf">
<img src="//upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Balzac.jpg/220px-Balzac.jpg" width="100px" />
<div class="text">some text here</div>
</div>
Final Demo
css
img {
display: inline-block;
vertical-align: middle;
width: 100px;
}
.text {
display: inline-block;
vertical-align: middle;
width: calc(100% - 100px);
}
A:
Your problem is that you have applied the clearfix but there is no float applied! Try adding the following
.cf img {float:left;}
.cf div {float:left;}
Demo
Clearfix .cf does nothing to float. Its purpose is to ensure the "parent" element of floated elements "expands" to contain the floated elements. Adding a background-color demonstrates this: http://jsfiddle.net/kxpur7z3/1/
My code in the answer floats each of the elements to the left. Note that "floating" removes the elements from the "natural flow" of the document.
Clearfix Demo
Here are a couple of good references to continue with:
https://developer.mozilla.org/en-US/docs/Web/CSS/float
http://www.sitepoint.com/web-foundations/floating-clearing-css/
So you want lots of text. Well, as block and inline-block elements expand to fit their content, you need to apply some width attributes. You have some options here.
Apply a specific width to the text: width:80%, width:300px etc
Applying a calculated width to the text (thanks @ 4dgaurav for reminding me of this): width:calc(100% - 100px)
Go dynamic on both image and text with complementary percentages: img {width:20%;} div {width:80%;}
Demo of various options
Q:
Regular Expression: Insert between two characters
My regular expression skills aren't great. I'm using Sublime Text and I want to replace everything between the first two slashes (/) with a different name. (This is for a URL flattening project.)
IMG SRC="/testing/graphics/real.gif"
A:
You can replace the result of following regex :
^/([^/]*)/(.*)
with :
/your_word/\2 #or /your_word/$2 for some languages
For example in python you can do :
>>> re.sub(r'^/([^/]*)/(.*)',r'/-------/\2',"/testing/graphics/real.gif")
'/-------/graphics/real.gif'
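For comparison, the same first-segment replacement can be done without a regex by splitting on the first two slashes; the name "flattened" below is just a placeholder:

```python
path = "/testing/graphics/real.gif"
parts = path.split("/", 2)   # ['', 'testing', 'graphics/real.gif']
parts[1] = "flattened"       # replace the first path segment
print("/".join(parts))       # /flattened/graphics/real.gif
```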
Q:
Where do couples who are non-UK citizens register the birth of their child in the UK?
My family and I live in the UK; we are not UK citizens. We had a baby born in the UK recently. Where do I register the birth of our baby in the UK? My wife was told to make an appointment with our GP, but information I found here seems to indicate otherwise.
A:
It turned out that one must register the birth of a child at the Register Office, not with the GP. The child also has to be registered with the GP, but that is for health services.
I did not make an appointment at the Register Office in advance, but I was able to get an appointment on the same day. The whole process at the Register Office took about half an hour. I was only asked for a proof of identity; no other documents were required.
A short version of the birth certificate, which was handwritten, was given for free. This contains only the child's details, but does not contain the parents' details. A long version can be purchased; this is typed and contains the parents' details in addition to the child's details.
As far as I can see, there is nothing more to it for the non citizens than for the citizens. For further information, see https://www.gov.uk/register-birth.
Q:
Why does my Create-React-App on Heroku have NODE_ENV=development?
I want to use NODE_ENV in my Create-React-App as described in https://medium.com/@tacomanator/environments-with-create-react-app-7b645312c09d
But when I run it on Heroku, process.env reports { NODE_ENV: "development", PUBLIC_URL: "" } - why? NODE_ENV is production in my Heroku dashboard.
A:
I solved it myself with the create-react-app-buildpack buildpack:
heroku buildpacks:add https://github.com/mars/create-react-app-buildpack.git
And then redeployed my app to Heroku.
Q:
How to install a newer version of libav than what is in the repos
I'm currently running debian 7 wheezy, but the package management system is the same as ubuntu so I thought I'd ask here. I'm trying to install at least version 9.x of libav. The current version in the repos is 0.8.x. I added the debian wheezy backports repo to my sources.list but it doesn't have any newer versions of libav. How can I install a newer version?
A:
I wasn't running the backports install correctly. The -t wheezy-backports is necessary.
It installs version 6.x, but it has the options I need.
http://backports.debian.org/Instructions/
apt-get -t wheezy-backports install "libav-tools"
Q:
Wrong width and position of non-attached horizontal QML ScrollBar (and ScrollIndicator) when I resize parent element from RTL
The problem is in position and width of the horizontal scrollbar and position of the content that has been scrolled when I resize the width of the window (parent) element in RTL direction.
When I do these steps:
Resize the width of window in LTR direction.
Everything is working fine.
Change scrollbar position to any other position that is different from 0.0, (for example move it all the way to the right side)
Resize the width of window in opposite (RTL) direction
The scrollbar starts to behave oddly and the scrollable content ends up in the wrong position
I get this situation:
Width of scrollbar's contentItem (aka scrollbar's handle) is wrongly calculated.
Scrollbar's position is also wrongly calculated
The content is not scrolled to the right place (property "x" of scrollable content is wrongly calculated )
What I want is that:
the width of scrollbar's contentItem (aka scrollbar's handle) increases proportionately as the width of the window increases
the content (which was positioned so that its right side was completely visible) should stay in that position. And as the window expands, the part of the content on the left side that was not visible until then should start to show up.
The same thing happens if I try to use a non-attached QML ScrollIndicator.
This seems to be a bug in QML, and I have reported it, but I need a solution now... If anyone could help, it would be great. Thanks!
import QtQuick 2.7
import QtQuick.Window 2.2
import QtQuick.Controls 2.3
Window
{
id: main
width: 400
height: 200
Rectangle
{
id: frame
clip: true
anchors.fill: parent
color: "purple"
Text
{
id: content
text: "ABCDE"
font.pixelSize: 160
x: -hbar.position * width
}
ScrollBar
{
id: hbar
hoverEnabled: true
active: true
policy: ScrollBar.AlwaysOn
visible: size < 1
orientation: Qt.Horizontal
size: frame.width / content.width
anchors.left: parent.left
anchors.right: parent.right
anchors.bottom: parent.bottom
background: Rectangle
{
color: "black"
}
}
}
}
scrollbar behavior
A:
You should consider using a Flickable instead of setting the position of your text manually:
Flickable {
anchors.fill: parent
contentWidth: content.paintedWidth
boundsBehavior: Flickable.StopAtBounds
ScrollBar.horizontal: ScrollBar {
hoverEnabled: true
active: true
policy: ScrollBar.AlwaysOn
anchors.left: parent.left
anchors.right: parent.right
anchors.bottom: parent.bottom
background: Rectangle {
color: "black"
}
}
Rectangle {
id: frame
clip: true
anchors.fill: parent
color: "purple"
Text {
id: content
text: "ABCDE"
font.pixelSize: 160
}
}
}
Edit:
If you really need the ScrollBar to be attached to a Rectangle, you can add bounds to your Text position:
x: Math.min(0, Math.max(-hbar.position * width, frame.width - content.width))
with
ScrollBar {
visible: frame.width < content.width
// ...
}
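The clamping in that x binding can be sanity-checked outside QML. A quick Python sketch with hypothetical sizes (content 800 px wide inside a 400 px frame):

```python
def content_x(position, content_width, frame_width):
    # mirrors: Math.min(0, Math.max(-position * width, frame.width - content.width))
    return min(0, max(-position * content_width, frame_width - content_width))

print(content_x(0.0, 800, 400))  # 0: content's left edge aligned with the frame
print(content_x(1.0, 800, 400))  # -400: content's right edge aligned with the frame
```

The outer min keeps the content from drifting right of the frame, and the inner max keeps its right edge from detaching on the left.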
Q:
apps and plugin revisions, making an Origen app production ready
Our development app, which has ~12 homegrown plugins and many Origen gems, needs to move from development to production level quality. I see many miscellaneous topics starting here but I don't see much on app/plugin versioning, Origen modes (debug and production) and items such as reference files and examples. I know there is the 'origen specs' command which runs rspec but what about 'origen test' and other related items.
thx
A:
Yes documentation on this topic is a bit thin on the ground, here is some info to remedy that:
Runtime mode
Yes Origen does have the concept of debug and production modes - http://origen-sdk.org/origen/guides/runtime/mode/
Running in production mode does give you some rudimentary protection against local edits finding their way into production builds, but in reality I think there are many projects in production today which just use debug mode all the time.
I think the main benefit of production mode is that you can hook into it to implement domain specific checks that you care about. For example, in one of my apps I have a before_generate callback handler which checks that the product firmware we are using is the latest released version when we are building for production.
Another useful technique is to embed the mode in things like the program, pattern or test names generated by your application. That makes it very clear if someone has used a debug build for something that they want to release to production.
Versioning
Company internal plugins should be version tagged and released in the same way as Origen and the open source plugins are.
Normally everyone does that by running the origen rc tag command, which will tag the plugin, maintain a release history, and build and publish the gem to your gem server.
Plugins should generally be careful not to lock to specific versions of its dependencies in its .gemspec file, and should instead specify a minimum version. Something like this is good, which means any version 1 of the dependency that is greater than 1.2.3: '~>1', '>1.2.3'. See here for more on how to specify gem versions: http://guides.rubygems.org/patterns/#declaring-dependencies
Top-level applications on the other hand generally want to lock to specific versions of gems within their Gemfile so that their builds are reproducible. In theory you get that for free by checking in the Gemfile.lock file which means that when bundler builds the gem bundle in a new workspace it will use the exact versions specified in that file, even though the rules in the Gemfile may allow a range of possible versions.
In practice, many engineers prefer to take a stricter/more declarative approach by specifying absolute versions within their Gemfile.
Unit Testing
Unit testing is almost always done via the rspec tool and by convention launched via the origen specs command.
Unit tests are good because they can target very specific features and they are therefore relatively easy to debug when they fail.
These tests are often very useful for doing test driven development, particularly when creating new APIs. The technique is to use the act of writing the test to define the API you wish you had, then make the changes required to make the test pass.
The downside to these tests is that they take time to write and this often gets dropped when under time pressure.
There is also an art to writing them and sometimes it can be difficult for less experienced engineers to know how to write a test to target a specific feature.
For this reason, unit tests tend to be used more in plugins, particularly those which provide APIs to parent applications.
Top-level applications, particularly those concerned with generating patterns or test programs, tend to instead rely on diff-based testing...
Diff-based Testing
Diff-based, or acceptance, testing means that someone blesses a particular output (i.e. a pattern or a test program file) as 'good' and then tests can be created which simply say that as long as the current output matches some previously known good output, the test should pass.
If a diff is encountered, then the test will fail and an engineer has to review the changes and decide if it is unwanted and highlighting a genuine problem, or whether the change is acceptable/expected and in which case the new output should become the new known good reference.
The advantage to this style of testing is that it doesn't take any time to write the test and yet it can provide extremely high test coverage. The downside is that it can sometimes be hard to track down the source of an unwanted diff since the test covers a large surface area.
Another issue can be that unwanted diffs can get lost in the noise if you make changes that globally affect all output. For that reason it is best to make such changes in isolation.
Many plugins and Origen itself implements this style of testing via a command called origen examples and the known good output is checked into the approved directory.
Some applications also implement a command called origen test which is simply a wrapper that combines origen specs and origen examples into a single command which is useful to create a combined test coverage analysis (more on that below).
You should refer to some of the open source repositories for examples of how to create these commands, but all new Origen application shells should come with example code commented out in config/commands.rb.
Checking in the approved output is OK when its footprint is relatively small, but for large production applications which may generate thousands of patterns or files and/or support many different targets, then it is not so practical to check all that in.
Also in that case, it sometimes becomes more useful to say "what has changed in the output compared to version X of the application?", rather than always comparing to the latest known good version.
You can manually run such tests by checking out a specific version of your application, generating the output and then use the origen save all command to locally save the approved output. Then checkout the latest version of your application, run the same generation command again and see if there are any changes.
That workflow can become tedious after a while and so Origen provides a helper for it via the Regression Manager: http://origen-sdk.org/origen/api/Origen/RegressionManager.html
It is common for applications to integrate that as a command called origen regression.
A complete implementation of an origen regression command from one of my apps is included below to give an example of how to create it.
In this case we are using Origen's LSF API to parallelize the commands, but if you don't use that just replace Origen.lsf.submit_origen_job cmd with system cmd to run your generation commands locally.
The regression manager will take care of replicating the same commands in the current and previous version of the application and providing you with the results of the diff.
Note that you can also supply build_reference: false if you want to re-run the regression against the same previous version of the app that you ran last time.
Test Coverage
Running any Origen command with the -c or --coverage switches will enable the generation of a test coverage report which you can review to see how well your test suite is doing.
Here is an example of a coverage report - http://origen-sdk.org/jtag/coverage/#_AllFiles
#commands/regression.rb
require 'optparse'
options = {}
default_targets = %w(product_a.rb product_b.rb)
short_targets = %w(product_a.rb)
opt_parser = OptionParser.new do |opts|
opts.banner = 'Usage: origen regression [options]'
opts.on('-t', '--target NAME1,NAME2,NAME3', Array, 'Override the default target, NAME can be a full path or a fragment of a target file name') { |t| options[:target] = t }
opts.on('-e', '--environment ENV', String, 'Override the default environment (tester).') { |e| options[:environment] = e }
opts.separator " (default targets: #{default_targets})"
opts.on('-s', '--short', 'Run a short regression (a single target)') { options[:short] = true }
opts.separator " (default short target: #{short_targets})"
opts.on('-f', '--full', 'Run a full regression (all production targets)') { options[:full] = true }
opts.on('-a', '--all', 'An alias for --full') { options[:full] = true }
opts.on('-c', '--ci', 'Build for bamboo CI') { options[:ci] = true } # avoids problematic targets for continuous integration
opts.on('-n', '--no_reference', 'Skip the reference build (careful!). Use when re-running the same regression back-back.') { options[:build_reference] = false }
opts.on('--email', 'Send yourself an email with the results when complete') { options[:send_email] = true }
opts.on('--email_all', 'Send the results email to all developers') { options[:email_all_developers] = true; options[:send_email] = true }
# Regression types-- saying no is easier to define the logic
opts.on('--no-patterns', 'Skip the vector-based patterns in the regression test') { options[:no_patterns] = true }
opts.on('--no-programs', 'Skip the programs in the regression test') { options[:no_programs] = true }
# Regression type only-- have to omit all other regression types
opts.on('--programs-only', 'Only do programs in the regression test') do
options[:no_patterns] = true
end
opts.separator ' (NOTE: must run program-based regression first to get pattern list prior to pattern regressions)'
opts.on('--patterns-only', 'Only do vector-based patterns in the regression test') do
options[:no_programs] = true
end
opts.on('-v', '--version type', String, 'Version for the reference workspace, latest, last, tag(ex: v1.0.0) or commit') { |v| options[:version] = v }
opts.on('--service_account', 'This option is set true only when running regressions through the Bamboo CI, a normal user should never have to use it') { options[:service_account] = true }
opts.on('--reference_workspace location', String, 'Reference workspace location') { |ref| options[:reference_workspace] = ref }
opts.separator ''
opts.on('-h', '--help', 'Show this message') { puts opts; exit }
end
opt_parser.parse! ARGV
if options[:version]
v = options[:version]
end
if options[:reference_workspace]
ref = options[:reference_workspace]
end
if options[:target]
t = options[:target]
t[0].sub!(/target\//, '') if t.length == 1 # remove path if there-- causes probs below
elsif options[:short]
t = short_targets
elsif options[:full]
t = Origen.target.production_targets.flatten
else
t = default_targets
end
if options[:environment]
e = options[:environment]
e.sub!(/environment\//, '') # remove path if there-- causes probs below
else
e = 'v93k.rb' # default environment
end
options[:target] = t
options[:environment] = e
def highlight(msg)
Origen.log.info '######################################################'
Origen.log.info msg
Origen.log.info '######################################################'
end
# Required to put the reference workspace in debug mode since the regression.rb file is modified,
# in future Origen should take care of this
Origen.environment.temporary = "#{options[:environment]}"
Origen.regression_manager.run(options) do |options|
unless options[:no_programs]
highlight 'Generating test programs...'
Origen.target.loop(options) do |options|
cmd = "program program/full.list -t #{options[:target]} --list #{options[:target]}.list -o #{Origen.root}/output/#{Origen.target.name} --environment #{options[:environment]} --mode debug --regression"
Origen.lsf.submit_origen_job cmd
end
highlight 'Waiting for test programs to complete...'
Origen.lsf.wait_for_completion
end
unless options[:no_patterns]
highlight 'Generating test patterns...'
Origen.target.loop(options) do |options|
# Generate the patterns required for the test program
Origen.file_handler.expand_list("#{options[:target]}.list").each do |pattern|
Origen.lsf.submit_origen_job "generate #{pattern} -t #{options[:target]} -o #{Origen.root}/output/#{Origen.target.name} --environment #{options[:environment]} --regression"
end
end
end
end
Q:
how to pass dictionary to test
I am quite new to writing tests (for Python), so now I have the question: how can I pass a dictionary to a test function? At the moment I do the following:
import os
import sys
import shutil
from app.views import file_io
import pytest
from tempfile import mkdtemp
import codecs
@pytest.fixture()
def tempdir():
tempdir = mkdtemp()
yield tempdir
shutil.rmtree(tempdir)
articles = [
["", "README.md", "# Hallo Welt", "<h1>Hallo Welt</h1>\n"],
["test", "article.md", "# Hallo Welt", "<h1>Hallo Welt</h1>\n"]
]
@pytest.mark.parametrize("dir, file, content_plain, content_md", articles)
def test_readRaw(tempdir, dir, file, content_plain, content_md):
dest_path=os.path.join(tempdir, dir)
os.makedirs(dest_path, exist_ok=True)
with codecs.open(os.path.join(dest_path, file), 'w', 'utf-8') as fh:
fh.write(content_plain)
assert file_io.readRaw(os.path.join(dest_path, file)) == content_plain
and my idea/hope is that I could modify the code so I can do something like:
articles = [
{ "dir": "",
"filename": "README.md",
"content_md": "# Hello World",
"content_html": "<h1>Hello World</h1>\n" },
{ "dir": "test",
"filename": "article.md",
"content_md": "# Hallo Welt",
"content_html": "<h1>Hallo Welt</h1>\n"}
]
@pytest.mark.parametrize(**articles, articles)
def test_readRaw(tempdir, **articles):
with codecs.open(os.path.join(dir, file), 'w', 'utf-8') as fh:
fh.write(content_md)
assert file_io.readRaw(os.path.join(dir, file)) == content_md
In particular, I would like to avoid mentioning all the keys, so that I can extend the dictionary later, if I miss something, without modifying all the tests.
Maybe this is a silly question, but as I said I am just beginning with this topic, so I would be very thankful for every hint on how I can do this (or on what a better way would be).
best regards
Dan
A:
Instead of trying to splat / unsplat, try taking article as a parameter:
@pytest.mark.parametrize('article', articles)
def test_readRaw(tempdir, article):
# use `article['foo']` here...
Another option (utilizing python3.6+ features) is to expand the keys manually -- though you have to be careful that you define each of the dictionaries in the same order
@pytest.mark.parametrize(tuple(articles[0]), [tuple(dct.values()) for dct in articles])
def test_readRaw(tempdir, dir, file, content_plain, content_md):
...
for what it's worth, I think you'd be sacrificing some readability (and making the test particularly fragile) by taking the second approach
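For illustration, the expansion used in that second approach can be checked outside of pytest; the articles list below mirrors the one from the question (this is a sketch of how the argnames/argvalues pair is built, not part of the original answer):

```python
articles = [
    {"dir": "", "filename": "README.md",
     "content_md": "# Hello World", "content_html": "<h1>Hello World</h1>\n"},
    {"dir": "test", "filename": "article.md",
     "content_md": "# Hallo Welt", "content_html": "<h1>Hallo Welt</h1>\n"},
]

# What would be handed to @pytest.mark.parametrize(argnames, argvalues):
argnames = tuple(articles[0])                       # keys, in insertion order
argvalues = [tuple(d.values()) for d in articles]   # one tuple per test case

assert argnames == ("dir", "filename", "content_md", "content_html")
assert argvalues[1][0] == "test"
```

This is also why the ordering caveat matters: if one dictionary listed its keys in a different order, its values tuple would no longer line up with `argnames`.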
~related advice
you can use the built-in tmp_path / tmpdir fixtures instead of building your own
your function-under-test doesn't actually depend on dir or file; you'd be better off not parametrizing these
taking both of those into account, your test becomes much simpler to do with classic parametrization (a simple table of inputs / outputs):
@pytest.mark.parametrize(
('content_plain', 'content_md'),
(
("# Hallo Welt", "<h1>Hallo Welt</h1>\n"),
("# ohai", "<h1>ohai</h1>\n"),
),
)
def test_readRaw(tmpdir, content_plain, content_md):
f = tmpdir.join('f')
f.write(content_plain)
assert file_io.readRaw(f) == content_md
disclaimer: I'm one of the current core devs on pytest
Q:
Mv files contained in directories to directories/new path
I'm working with macOS Sierra.
I have ~1000+ directories with lots of files in them: Word, Excel and zipped documents. Only one sub-level. Important: there are spaces in the filenames and in the folder names.
We decided to change the tree structure of the files; all the files in each directory need to be moved to a subdirectory of it called "Word & Excel" before merging with another directory tree.
I managed to create the Word & Excel directory with this command :
for dir in */; do mkdir -- "$dir/Word & Excel"; done
Basically, I just want to do
for dir in */; do mv $dir/* "./Word & Excel"; done
It is not going to work. I do not even understand whether the problem is with the $dir (I need the double quotes to avoid the space problem, but the asterisk is not going to work if it is inside the double quotes...) or with the asterisk.
I tried to get a cleaner version by following a previous answer found on the web to a similar problem, clearing the subfolder of the results (and trying basically to avoid my wildcard problem) :
for dir in */; do mv `ls -A "$dir" | grep -v "Word & Excel"` ./"Word & Excel" | cd ../ ; done
I am completely stuck.
Any idea how to handle this?
A:
This should do it, even on Mac OS X. And yes, find sometimes needs the anchor directory.
while read dir; do
mkdir -p "$dir/Word & Excel"
find "$dir" -maxdepth 1 -type f -exec mv {} "$dir/Word & Excel" \;
done < <(find . -mindepth 1 -maxdepth 1 -type d)
This loops over the sub-directories of the current directory (one sub-level only), for each of them (dir), creates the dir/Word & Excel sub-sub-directory if it does not already exist, finds all regular files immediately inside dir and moves them in the dir/Word & Excel. And it should work even with crazy directory and file names.
This being said, if you could convince your boss not to use unusual file or directory names, your life with bash and the Command Line Interface (CLI) would probably be much easier.
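The quoting pitfall from the question can also be seen in a tiny sandbox: quote the variable, but leave the glob outside the quotes. The paths below are made up purely for the demo:

```shell
demo="/tmp/mvdemo/My Dir"
mkdir -p "$demo/Word & Excel"
touch "$demo/a file.txt" "$demo/b file.txt"
# "$demo"/* also matches "Word & Excel" itself; mv fails on that one
# operand (a directory cannot move into itself) but moves the files.
mv "$demo"/* "$demo/Word & Excel/" 2>/dev/null || true
ls "$demo/Word & Excel"
```

The `find ... -type f` variant in the answer avoids that harmless error entirely, since it only selects regular files.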
Q:
Is limiting the performance or setting quotas on an Application domain possible?
I know I can view certain counters such as memory usage but can I impose limits on individual app domains?
A:
I used to work on a system where we were attempting to do something similar.
With the regular .NET CLR you only get very crude controls. You can find out how much memory the AppDomain is using, but it doesn't tend to be up to date, and as memory pressure increases - becomes completely unreliable.
There may be a way, however, to do this if you're willing to go to the level of hosting the CLR through C++, and using the CLR Hosting APIs.
I've been told it's possible to do things such as intercept memory allocation requests from the hosted CLR. You may also be able to limit I/O requests and CPU utilisation too.
Q:
'Horeca', is it English? Alternatives?
In Dutch there's a quite commonly used word that denotes the commercial sector around selling food and beverages for immediate (or near-immediate, e.g. take-out meals) consumption: horeca. (This usually also includes snackbars and the like, but not supermarkets)
I'm in the process of creating an English version of a website that has it as a menu item, and I'm looking for a translation of approximately similar size (i.e. not a full sentence). I found that the word horeca also exists in English, but the Wikipedia page is quite small and seems to be written by a Dutch native. The definition given is the exact definition of the word I'm looking for, but I'd rather have something that's less obscurely used in English..
Horeca (or HORECA) is the sector of the food service industry that
consists of establishments which prepare and serve food and beverages.
The term is a syllabic abbreviation of the words
Hotel/Restaurant/Café.
I'm inclined to doubt that it's a word English natives would understand. Does anyone have any alternatives with a similar meaning?
A:
The US National Restaurant Association calls this the restaurant industry. More generally, the USDA calls the sector foodservice outlets:
Foodservice outlets are facilities that serve meals and snacks for
immediate consumption on site (food away from home).
I do not know the corresponding UK or AUS terms, if that is what you are looking for.
Q:
Celebration for the site's graduation in Spain
I am planning our graduation celebration in Spain, but I need to know which city will work best: Madrid or Barcelona. The date falls in the week of May 22-28.
Please answer with the city that would be easiest for you to attend. We will vote, and from the winner I will pick the place where we will hold our celebration.
Time to vote!
Voting will end on May 3.
The city where we will hold our celebration will be: Madrid!
Update:
The celebration in Spain will take place in Madrid on May 25. Thanks to our friendship with IronHack, we will hold the event at their offices... and wow, what offices! I had the chance to see their space virtually and, as they say, the place is really cool. There will be talks by our dear Konamiman, an IronHack alumnus, and also by me.
We would love to see you on May 25. The event is free and nothing is required to attend. We will have food and drink and the chance to meet each other in person!
To sign up, please do so here via EventBrite. The address and the agenda can be found there. If you have questions or comments, please post them here.
See you soon!
We are already in Madrid to celebrate our graduation. The space is beautiful, and so are the people! About the schedule: we had said we would have a livestream, but it will not be possible. I am sorry we cannot do it, but we will take photos and share them here.
Photos from the event:
A:
I could easily attend a celebration in Madrid.
A:
I could easily attend a celebration in Barcelona.
Q:
How can a struct inherit from a class in type parameters constrains?
I saw the following line of code:
class Sample<T,U> where T:class where U: struct, T
In the case above, parameter U is value type, and it derives from reference type T.
How can that line be legal?
Also, if a value type inherits from a reference type, where is memory allocated: heap or stack?
A:
Contrary to another answer, there are types beyond T=System.Object where this compiles:
class Sample<T,U> where T : class where U : struct, T
The "T : class" constraint doesn't actually mean that T has to be a class. It means T has to be a reference type. That includes interfaces, and structs can implement interfaces. So, for example, T=IConvertible, U=System.Int32 works perfectly well.
I can't imagine this is a particularly common or useful constraint, but it's not quite as counterintuitive as it seems at first sight.
As to the more general point: as Obiwan Kenobi says, it all depends on your point of view. The CLI spec has quite a complicated explanation of this, where "derives from" and "inherits from" don't mean quite the same thing, IIRC. But no, you can't specify the base type of a value type - it's always either System.ValueType or System.Enum (which derives from System.ValueType) and that's picked on the basis of whether you're declaring a struct or an enum. It's somewhat confusing that both of these are, themselves, reference types...
Q:
Reload a Web Page Using C#
I have a web page that prompts for user input via DropDownLists in some table cells. When a selection is made the selection replaces the DropDownList so that the action can only be performed once. In the case that a change needs to be made I want to be able to click a button that reloads the page from scratch. I have Googled and Googled but I have not managed to find a way to do this.
Any advice is appreciated.
Regards.
A:
Put a link on the page with the text "Reload" and the url the url of the page. It's perfectly valid to have a page with a link to itself.
If you don't like the link idea, use a standard Button and in the click event, use Response.Redirect to redirect to the current page.
A:
You can set an OnClick for your button that resets each DropDownList's SelectedIndex to 0 instead of reloading the page from scratch. Alternatively, you can set a Response.Redirect([the page's url]) into the OnClick as is suggested here.
Q:
How to interpret the expression "кулинарный опыт кулинарным опытом"?
Кулинарный опыт кулинарным опытом, но вы еще совсем дети.
Context: Two kids have outstanding culinary skills, especially given their young age.
I suppose the phrase literally translates as:
Culinary skills are in culinary skills (only) -- not to be associated with anything else.
I wonder if it basically means something along the lines of:
You two have outstanding culinary skills, I'll grant you that, but leaving them aside, you're still kids, all right.
{or}: You two have remarkable culinary skills, I'll give you that, but it doesn't change the fact that you're still kids through and through.
(Q1): How do you paraphrase the expression if instead of the adjective "кулинарный" you have a genitive noun like "работы"?
??? Опыт работы опытом работы, но ...
(Q2): Are there other similar expressions that use a deliberately repetitious wording like this? Or can you apply this "nominative + instrumental" construction to virtually any context? For instance, can you express this part in bold with this construction?
I love music, but (for all my love of it) I need to be more realistic in choosing my career.
A:
This "[smth.] in nominative+instrumental" construction is used in conversational speech, indeed, when you want to convey "[smth.] is fine [by itself]/does as it must do, but ..." or "putting [smth.] aside, ...".
Mildly confrontational, somewhat assuming greater expertise on the speaker's part (perhaps correctly). "It's right/fine of you to do/say/point out/be capable of [smth.], but [my greater expertise] leads to ...".
A:
I would actually translate it as "Having some culinary skills is fine and dandy, but you're still just kids."
So yes, it means that you are agreeing with some point that's been made, but you want to point out some larger context or some circumstances that make things a bit more complicated.
There are a few expressions like that, one that comes to mind is "Работа работой, а обед по расписанию".
For the second part of the question - yes, you can apply it in a number of situations, though not sure about ANY context. For your example: "Музыка музыкой, но мне надо как-то деньги зарабатывать".
You can come up with any number of examples - like "Любовь любовью, но у тебя экзамены на носу".
You can also in many cases (but not always) use " - это, конечно, хорошо, но" instead of repeating the word or expression. Like so: "Музыка - это, конечно, хорошо, но мне надо как-то деньги зарабатывать", or in your original sentence - "Кулинарный опыт - это, конечно, хорошо, но...".
Q:
Dict with curly braces and OrderedDict
I thought I had set out a simple project for myself, but I guess not. I think I'm using the OrderedDict function wrong, because I keep getting:
ValueError: too many values to unpack (expected 2)
Code:
import random
import _collections
shop = {
'bread': 2,
'chips': 4,
'tacos': 5,
'tuna': 4,
'bacon': 8,
}
print(shop)
'''
items = list(shop.keys())
random.shuffle(items)
_collections.OrderedDict(items)
'''
n = random.randrange(0, len(shop.keys()))
m = random.randrange(n, len(shop.keys()))
if m <= n:
m += 1
print(n, " ", m)
for key in shop.keys():
value = shop[key] * random.uniform(0.7,2.3)
print(key, "=", int(value))
if n < m:
n += 1
else:
break
I would like this code to mix up the dictionary, then multiply the values by 0.7 - 2.3, then loop within the range 0-5 times in order to give me a few random keys from the dictionary.
I have placed ''' ''' around the code that I struggle with and that gives me the errors.
A:
You are very close, but you cannot just give the list of keys to the new OrderedDict, you must give the values too... try this:
import random
import collections
shop = {
'bread': 2,
'chips': 4,
'tacos': 5,
'tuna': 4,
'bacon': 8,
}
print(shop)
items = list(shop.keys())
random.shuffle(items)
print(items)
ordered_shop = collections.OrderedDict()
for item in items:
ordered_shop[item] = shop[item]
print(ordered_shop)
Example output:
{'chips': 4, 'tuna': 4, 'bread': 2, 'bacon': 8, 'tacos': 5}
['bacon', 'chips', 'bread', 'tuna', 'tacos']
OrderedDict([('bacon', 8), ('chips', 4), ('bread', 2), ('tuna', 4), ('tacos', 5)])
You could also do this like this (as pointed out by @ShadowRanger):
items = list(shop.items())
random.shuffle(items)
oshop = collections.OrderedDict(items)
This works because the OrderedDict constructor takes a list of key-value tuples. On reflection, this is probably what you were after with your initial approach - swap keys() for items().
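Here is a compact, runnable sketch of that items-based approach, seeded so the shuffle is repeatable (the seed is an addition for the demo, not part of the answer):

```python
import collections
import random

shop = {'bread': 2, 'chips': 4, 'tacos': 5, 'tuna': 4, 'bacon': 8}

items = list(shop.items())        # key/value tuples, not just the keys
random.seed(0)                    # fixed seed so repeated runs shuffle alike
random.shuffle(items)
oshop = collections.OrderedDict(items)

assert dict(oshop) == shop        # same mapping; only the order may change
assert len(oshop) == 5
```

Because `items()` carries the values along with the keys, there is no need for the follow-up lookup loop from the first version.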
Q:
Render to same Container with multiple chart types and options
Depending on the selected chart type (line/column/scatter, etc.), is it possible to have multiple option settings in one single render? For instance I have two chart types:
var options = {
chart: {
renderTo: 'container',
type: 'column',
},
title: {
text: title
},
subtitle: {
text: subtitle
},
xAxis: {
categories: []
},
yAxis: {
min: 0,
title: {
text: 'Percentage'
}
},
legend: {
layout: 'vertical',
backgroundColor: '#FFFFFF',
align: 'left',
verticalAlign: 'top',
x: 100,
y: 70,
floating: true,
shadow: true
},
tooltip: {
formatter: function () {
return Math.round(this.y * 100) + '%';
}
},
plotOptions: {
column: {
pointPadding: 0.2,
borderWidth: 0
}
},
series: []
};
And:
var options = {
chart: {
renderTo: 'container',
type: 'line'
},
title: {
text: title
},
subtitle: {
text: subtitle
},
xAxis: {
categories: []
},
yAxis: {
min: 0,
title: {
text: 'Count'
}
},
tooltip: {
formatter: function () {
var s = '';
$.each(this.points, function (i, point) {
s += point.series.name + ': ' + point.y + '<br/>';
});
return s;
},
shared: true
    },
legend: {
layout: 'vertical',
align: 'right',
verticalAlign: 'top',
x: -10,
y: 100,
borderWidth: 0
},
series: []
};
Is it possible to combine them together as in one $(document).on("click", ".render-Chart", function () {...} or var chart? Thank you!
A:
I didn't really get the question. But if the question is "Is it possible?", then why don't you directly try doing it? If you have tried and it did not turn out as expected, come back here with a jsFiddle tryout and your expected vs. actual behavior.
If what you are asking is: you have two charting options, one for a line and another for a column, both pointing to the same container, and based on some click or condition you want to plot either a line or a column but not both, then yes, it is very much possible. If you are afraid that having the same container id in both options causes some conflict, note that the options by themselves are just variables that store data/info; they don't modify the DOM or execute anything. It is only the Highcharts constructor that takes them as a parameter and plots the chart, so as long as the constructor is called only once, you are safe mentioning the same container any number of times; if you call it again, the latest constructor's chart is the one plotted to that container.
var lineOptions = {
chart: {
type: 'line',
renderTo: 'container',
//...
}
//...
};
var columnOptions = {
chart: {
type: 'column',
renderTo: 'container',
//...
}
//...
};
//...
var chart;
function onClick(){
if(...){
chart = new Highcharts.StockChart(lineOptions);
}else if(...){
chart = new Highcharts.StockChart(columnOptions);
}
}
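The selection logic itself can be exercised without Highcharts at all; the snippet below is a stand-alone sketch (the constructor call from the answer is deliberately left out, and the option objects are trimmed to the fields that matter here):

```javascript
// Two option objects sharing the same render target, as in the answer.
const lineOptions   = { chart: { type: 'line',   renderTo: 'container' } };
const columnOptions = { chart: { type: 'column', renderTo: 'container' } };

// Pick one options object based on a condition; only the chosen one
// would ever be passed to the Highcharts constructor.
function optionsFor(useLine) {
  return useLine ? lineOptions : columnOptions;
}
```

Whichever object `optionsFor` returns is the only one that touches the DOM, which is why sharing `renderTo: 'container'` between them is harmless.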
Q:
How to draw a sequence of lines with different opacities without blending overlaps but blending with the background?
I am trying to add basic pressure sensitivity to a drawing app.
I am running into an issue when I try to draw strokes, where the opacity depends on the pressure - the point of overlap of the line segments blends the two lines, which creates darker spots:
This is my code:
CGContextSetLineCap(context, kCGLineCapRound);
CGContextSetLineWidth(context, path.thickness);
CGContextSetStrokeColorWithColor(context, self.lineColor.CGColor);
for (Segment *segment in segments) {
CGContextMoveToPoint(context, segment.start.x, segment.start.y);
CGContextAddLineToPoint(context, segment.end.x, segment.end.y);
CGContextSetAlpha(context, segment.alpha);
CGContextStrokePath(context);
}
I want to avoid the circles where the line caps meet, but still be able to have transparency over the background.
I have experimented with the context blend modes, but nothing is completely satisfactory - kCGBlendModeDestinationAtop gave me smooth lines, but there is an artifact at the very end of the line and I lose blending with the background.
The banding is not an issue.
A:
In case anyone has the same problem, the way I solved it was to draw the line segments into CGLayer (Core Graphics layer - not Core Animation layer and not Core Graphics Transparency layer).
I set the CGLayer context blend mode to DestinationAtop and then rendered the CGLayer in my real CGContext with normal blending.
Worked like a charm.
Q:
Getting accurate time measurement with `perf-stat`
I am trying to benchmark simple "hello, world" programs written in several languages. I am a n00b in this field, and the perf-stat manual's lack of examples does not make it easy.
To do that I run perf-stat at high priority to avoid context switching. So I came up with something like this:
sudo chrt -f 99 perf stat -e cs -e cpu-clock ./hello_c
However, the results I get for the same program differ significantly. For instance, the results for the same compiled C executable can be:
0 cs # 0.000 K/sec
0.42 msec cpu-clock # 0.612 CPUs utilized
0.000694107 seconds time elapsed
0.000713000 seconds user
0.000000000 seconds sys
or
0 cs # 0.000 K/sec
0.58 msec cpu-clock # 0.620 CPUs utilized
0.000936635 seconds time elapsed
0.000000000 seconds user
0.000940000 seconds sys
In this particular example there is a 0.242528 msec discrepancy, even though context switches equal 0 in both runs.
Is there something I am missing, some calculation I need to do? Or is it not possible to get closer results? Are there any other options to fix this problem than taking the average of n executions?
A:
There are a variety of reasons you can see variation when you repeatedly benchmark what appears to be the same code. I have covered some of the reasons in another answer and it would be worthwhile to keep those in mind.
However, based on experience and playing the probabilities, we can eliminate many of those up front. What's left are the most likely causes of your relatively large deviations for short programs from a cold start:
CPU power saving and frequency scaling features.
Actual runtime behavior differences, i.e., different code executed in the runtime library, VM, OS or other supporting infrastructure each time you run your program.
Some caching effect, or code or data alignment effect that varies from run to run.
You can probably separate these three effects with a plain perf stat without overriding the event list, like:
$ perf stat true
Performance counter stats for 'true':
0.258367 task-clock (msec) # 0.427 CPUs utilized
0 context-switches # 0.000 K/sec
0 cpu-migrations # 0.000 K/sec
41 page-faults # 0.159 M/sec
664,570 cycles # 2.572 GHz
486,817 instructions # 0.73 insn per cycle
92,503 branches # 358.029 M/sec
3,978 branch-misses # 4.30% of all branches
0.000605076 seconds time elapsed
Look first at the 2.572 GHz line. This shows the effective CPU frequency, calculating by dividing the true number of CPU cycles by the task-clock value (CPU time spent by the program). If this varies from run to run, the wall-clock time performance deviation is partly or completely explained by this change, and the most likely cause is (1) above, i.e., CPU frequency scaling, including both scaling below nominal frequency (power saving), and above (turbo boost or similar features).
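As a quick check of that arithmetic, the effective frequency in the sample output above can be recomputed from the cycles and task-clock values (numbers copied from that run; awk is used here purely as a calculator):

```shell
cycles=664570          # from the "cycles" line
task_clock_ms=0.258367 # from the "task-clock (msec)" line
awk -v c="$cycles" -v t="$task_clock_ms" \
    'BEGIN { printf "%.3f GHz\n", c / (t * 1e6) }'
```

Dividing cycles by CPU milliseconds times 1e6 gives cycles per nanosecond, i.e. GHz, matching the `2.572 GHz` perf printed.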
The details of disabling frequency scaling depends on the hardware, but a common one that works on most modern Linux distributions is cpupower -c all frequency-set -g performance to inhibit below-nominal scaling.
Disabling turbo boost is more complicated and may depend on the hardware platform and even the specific CPU, but for recent x86 some options include:
Writing 0 to /sys/devices/system/cpu/intel_pstate/no_turbo (Intel only)
Doing a wrmsr -p${core} 0x1a0 0x4000850089 for each ${core} in your system (although one on each socket is probably enough on some/most/all chips?). (Intel only)
Adjust the /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq value to set a maximum frequency.
Use the userspace governor and /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed to set a fixed frequency.
Another option is to simply run your test repeatedly, and hope that the CPU quickly reaches a steady state. perf stat has built-in support for that with the --repeat=N option:
-r, --repeat=<n>
repeat command and print average + stddev (max: 100). 0 means forever.
Let's say you observe that the frequency is always the same (within 1% or so), or you have fixed the frequency issues but some variance remains.
Next, check the instructions line. This is a rough indicator of how much total work your program is doing. If it varies in the same direction and similar relative variance to your runtime variance, you have a problem of type (2): some runs are doing more work than others. Without knowing what your program is, it would be hard to say more, but you can use tools like strace, perf record + perf annotate to track that down.
If instructions doesn't vary, and frequency is fixed, but runtime varies, you have a problem of type (3) or "other". You'll want to look at more performance counters to see which correlate with the slower runs: are you having more cache misses? More context switches? More branch mispredictions? The list goes on. Once you find out what is slowing you down, you can try to isolate the code that is causing it. You can also go the other direction: using traditional profiling to determine what part of the code slows down on the slow runs.
Good luck!
Q:
Determine where documents differ with Python
I have been using the Python difflib library to find where 2 documents differ. The Differ().compare() method does this, but it is very slow - at least 100x slower for large HTML documents compared to the diff command.
How can I efficiently determine where 2 documents differ in Python? (Ideally I am after the positions rather than the actual text, which is what SequenceMatcher().get_opcodes() returns.)
A:
a = open("file1.txt").readlines()
b = open("file2.txt").readlines()
count = 0
pos = 0
while 1:
count += 1
try:
al = a.pop(0)
bl = b.pop(0)
if al != bl:
print "files differ on line %d, byte %d" % (count,pos)
pos += len(al)
except IndexError:
break
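A more idiomatic sketch of the same idea, using zip instead of destructive pops; like the loop above it reports the 1-based line number and the byte offset of the first difference (the function name and sample lines are made up for the demo):

```python
def first_diff(a_lines, b_lines):
    """Return (line_number, byte_offset) of the first differing line, or None."""
    pos = 0
    for lineno, (al, bl) in enumerate(zip(a_lines, b_lines), start=1):
        if al != bl:
            return lineno, pos
        pos += len(al)
    return None  # common prefix matches; one file may still be longer

a = ["same\n", "old line\n"]
b = ["same\n", "new line\n"]
assert first_diff(a, b) == (2, 5)
```

Note that, as in the original loop, the offset is counted in characters of the first file, and a `None` result does not rule out one file simply being longer than the other.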
Q:
Java Runtime Textbox creating
Hello, I want to create text boxes on a panel at runtime, i.e. when I enter 3 and 4 in two text boxes, clicking a button should display text boxes in 3 rows and 4 columns in Swing.
Here is my code.
JFrame jf=new JFrame();
JPanel jp=new JPanel();
JTextField jt1=new JTextField();
JTextField jt2=new JTextField();
JLabel jl1=new JLabel("Enter Row");
JLabel jl2=new JLabel("Enter Column");
JButton jb1=new JButton("OK");
JButton jb2=new JButton("Cancel");
jf.setContentPane(jp);
jp.setLayout(null);
jp.setBackground(Color.CYAN);
jp.add(jb1);
jp.add(jt1);
jp.add(jt2);
jp.add(jl1);
jp.add(jl2);
jp.add(jb2);
jf.setVisible(true);
jf.setSize(500,500);
jt1.setBounds(200,20,50,30);
jt2.setBounds(200,60,50,30);
jl1.setBounds(90, 20, 80, 30);
jl2.setBounds(90,60,80,30);
jb1.setBounds(150, 100, 80, 80);
jb1.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
}
A:
Here is some rough code, without any validation or layout considerations. You may use this to fulfill your requirement further.
public class ClsCreateTextBoxes extends javax.swing.JFrame{
private javax.swing.JPanel jpInputPanel = null;
private javax.swing.JTextField jtfRows = null;
private javax.swing.JTextField jtfColumns = null;
private javax.swing.JButton jbCreateMatrix = null;
private javax.swing.JPanel jpMatrixPanel = null;
public ClsCreateTextBoxes(){
setSize(400, 400);
setDefaultCloseOperation(EXIT_ON_CLOSE);
getContentPane().setLayout(new java.awt.BorderLayout());
jpInputPanel = new javax.swing.JPanel(new java.awt.FlowLayout());
jtfRows = new javax.swing.JTextField(10);
jpInputPanel.add(jtfRows);
jtfColumns = new javax.swing.JTextField(10);
jpInputPanel.add(jtfColumns);
jbCreateMatrix = new javax.swing.JButton("Create");
jbCreateMatrix.addActionListener(new java.awt.event.ActionListener(){
public void actionPerformed(java.awt.event.ActionEvent ae){
// Assuming proper number is given
jpMatrixPanel.setLayout(new java.awt.GridLayout(Integer.parseInt(jtfRows.getText()), Integer.parseInt(jtfColumns.getText())));
for(int rowIndex = 0; rowIndex < Integer.parseInt(jtfRows.getText()); rowIndex ++){
for(int columnIndex = 0; columnIndex < Integer.parseInt(jtfColumns.getText()); columnIndex ++){
jpMatrixPanel.add(new javax.swing.JTextField(10));
pack();
}
}
}
});
jpInputPanel.add(jbCreateMatrix);
getContentPane().add(jpInputPanel, java.awt.BorderLayout.NORTH);
jpMatrixPanel = new javax.swing.JPanel();
getContentPane().add(jpMatrixPanel, java.awt.BorderLayout.SOUTH);
pack();
}
public static void main(String[] args){
ClsCreateTextBoxes createdTextBoxes = new ClsCreateTextBoxes();
createdTextBoxes.setVisible(true);
}
}
Q:
what's the best way to re-index just the models that changed during solr downtime?
if I have millions of User records with some text fields getting indexed to solr on create and on update, how do I go back and re-index the few records that never made it to solr?
i.e. what if solr goes down for a few minutes during the day and about 300 records out of millions never got indexed.
I don't want to re-index millions of records, just the 300.
A:
A good way to manage this would be to just insert the record IDs into a queue table on create and update, and then have a process that runs later to index the records. That way if Solr goes down, you don't have to worry about which records weren't processed, they'll just continue sitting in the queue until processed. The advantage of this is that your database doesn't have to wait for the solr update to complete before completing the transaction. The downside is that Solr isn't going perfectly in sync with what's in the database. You can adjust how often the queue reading program runs to accommodate your needs for that.
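That queue-table pattern can be sketched in a few lines; the snippet below uses sqlite3 purely for illustration, and the table and function names are made up, not taken from Solr or any real indexing library:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE solr_queue (user_id INTEGER PRIMARY KEY)")

def enqueue(user_id):
    # Called from the create/update hooks; duplicates collapse to one row.
    db.execute("INSERT OR IGNORE INTO solr_queue VALUES (?)", (user_id,))

def drain(index_fn):
    # Periodic worker; if index_fn raises (Solr down), rows stay queued.
    for (uid,) in db.execute("SELECT user_id FROM solr_queue").fetchall():
        index_fn(uid)
    db.execute("DELETE FROM solr_queue")

enqueue(1); enqueue(2); enqueue(1)   # 1 queued twice, stored once
seen = []
drain(seen.append)
assert seen == [1, 2]
```

If the indexer fails partway through, a production version would delete only the rows it actually indexed, so the 300 stragglers from the outage remain in the queue for the next run.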
Q:
ReferenceError: function is not defined on the onclick function
I get a "ReferenceError: punt is not defined" - everything seems to be right and I can't identify the mistake.
Here is the related code:
<script src="../Scripts/jquery-1.9.1.min.js" type="text/javascript">
function punt(rowIndex, Name, actionType) {
alert("hello");
}
</script>
and inside the ItemTemplate in the Repeater I have:
<input type="image" style="border-width:0" src='<%=ResolveUrl("~/css/images/icon_update.png") %>'
alt="Update Reviewer List" tabindex="0" title="Update Reviewer List"
onclick="punt(<%#Container.ItemIndex%>,
'<%#HttpUtility.HtmlEncode((string)DataBinder.Eval(Container.DataItem, "Name"))%>',
'Update');
return false;" />
A:
You can't combine a script include and inline javascript.
<script src="../Scripts/jquery-1.9.1.min.js" type="text/javascript"></script>
<script>
function punt(rowIndex, Name, actionType) {
alert("hello");
}
</script>
Q:
Count different entries by year in SQL
I have the following table, shown in the picture. I want to count all the new codes occurring per year, e.g. year 1972: NewCode 4, 30 times; NewCode 5, 60 times. Year 1857: NewCode 4, 30 times; NewCode 5, 60 times.
Preferably, save the result in Column1.
A:
Could be as simple as:
select Year
, NewCode
, count(*)
from YourTable
group by
Year
, NewCode
If you want to update the table:
update YourTable yt1
set Column1 =
(
select count(*)
from YourTable yt2
where yt1.Year = yt2.Year
and yt1.NewCode = yt2.NewCode
)
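The GROUP BY from the answer can be checked against a throwaway sqlite3 table (the table and column names here are assumptions based on the answer's placeholders, and the row counts are toy values, not the 30/60 from the question):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE YourTable (Year INTEGER, NewCode INTEGER)")
db.executemany("INSERT INTO YourTable VALUES (?, ?)",
               [(1972, 4)] * 3 + [(1972, 5)] * 2 + [(1857, 4)])

rows = db.execute("""SELECT Year, NewCode, COUNT(*)
                     FROM YourTable
                     GROUP BY Year, NewCode
                     ORDER BY Year, NewCode""").fetchall()
assert rows == [(1857, 4, 1), (1972, 4, 3), (1972, 5, 2)]
```

Each (Year, NewCode) pair becomes one output row with its occurrence count, which is exactly what the update statement then copies back per row.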
Q:
Unable to propose tag wiki
I am trying to create a tag wiki for question-of-the-week, but every time I click on Propose Tag Wiki it just reloads the page and doesn't let me make the tag wiki.
A:
We're not currently allowing tag wikis to be edited on metas, as we plan to push out the shared meta tag wikis from meta.so.
This may change in the future.
Q:
Receive multiple columns from one sql request in PHP
I am working on a friend list function and I can't figure out how to correctly receive the values.
My code looks like this:
$getuid = $mysqli->prepare("SELECT `uid` FROM `users` WHERE name = ? OR name = ?");
$getuid->bind_param("ss", $user, $friend);
$getuid->execute();
$getuid->bind_result($uid);
$getuid->fetch();
$getuid->close();
$resetpass = $mysqli->prepare("INSERT INTO `friendlist` SET `friend1`=?, `friend2`=?, `accept`=0");
$resetpass->bind_param("ss", $uid[0], $uid[1]);
With the first query I get exactly two uid values back. I want to use them in the second query. It seems like bind_result is not working, neither as an array nor when using two values in bind_result. How can I do this using mysqli? I can't use get_result because I'm on PHP 5.2.
Anyone able to help me?
A:
Okay I finally understood the concept of fetch.
In order to receive all the values I have to retrieve them in a while-loop.
Here is the solution:
$getuid = $mysqli->prepare("SELECT `uid` FROM `users` WHERE name = ? OR name = ?");
$getuid->bind_param("ss", $user, $friend);
$arra = array();
$getuid->execute();
$getuid->bind_result($uid);
while ($getuid->fetch()) {
$arra[] = $uid;
}
Now I can call the array values using $arra[0] and $arra[1]
Q:
VB.net ConnectionStrings object reference
When connecting I get an error:
object reference not set to instance of an object
Form1
Dim c As String = System.Configuration.ConfigurationManager.ConnectionStrings("CON").ConnectionString
Dim con As New MySqlConnection(c)
App.config
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<connectionStrings>
<add name="CON" providerName="MySql.Data.MySqlClient" connectionString="Server=localhost;Database=DATABASE_;Uid=user;Pwd=pass" />
</connectionStrings>
</configuration>
I have no idea how to fix this issue...
This works
Dim con As New MySqlConnection("Server=localhost;Database=DATABASE_;Uid=user;Pwd=pass")
Ultimately I am trying to protect my connection string.
Debug output:
A first chance exception of type 'System.NullReferenceException' occurred in Loader.exe
This is saying that System.Configuration.ConfigurationManager.ConnectionStrings("CON").ConnectionString is returning null
A:
You've got different names in app.config and in code.
Imports System.Configuration.ConfigurationManager
Dim cs As String = ConnectionStrings("CON").ConnectionString
EDIT
The problem was a bad app.config setup file. Please read the comments.
Q:
Perl script and page load
I would like to ask: how can I write the HTML code of a web page so that a script is launched at the same time the page loads?
A:
For example, make the script call by loading it as an image <img src="http://domain.ru/scrip.pl"/>, or as a JavaScript include <script src="http://domain.ru/scrip.pl"></script>, or, at worst, as an iframe <iframe src="http://domain.ru/scrip.pl"></iframe>; as for the attributes that make the iframe invisible — google it, I don't remember.
Q:
Audio recording using ALSA in PCM format
I am working on audio capture using ALSA on the Linux platform.
I am able to capture audio using the code below, where I pass the "default" device as an argument; it dumps the audio data into an in.pcm file.
However, when I try to play the in.pcm file, I hear only noise. I am trying to play the audio with the command below:
ffplay -autoexit -f f32le -ac 1 -ar 44100 in.pcm
Code:
#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>
main (int argc, char *argv[])
{
int i;
int err;
char *buffer;
int buffer_frames = 128;
unsigned int rate = 44100;
snd_pcm_t *capture_handle;
snd_pcm_hw_params_t *hw_params;
snd_pcm_format_t format = SND_PCM_FORMAT_FLOAT;
// snd_pcm_format_t format = SND_PCM_FORMAT_S16_LE;
if ((err = snd_pcm_open (&capture_handle, argv[1], SND_PCM_STREAM_CAPTURE, 0)) < 0) {
fprintf (stderr, "cannot open audio device %s (%s)\n",
argv[1],
snd_strerror (err));
exit (1);
}
fprintf(stdout, "audio interface opened\n");
if ((err = snd_pcm_hw_params_malloc (&hw_params)) < 0) {
fprintf (stderr, "cannot allocate hardware parameter structure (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params allocated\n");
if ((err = snd_pcm_hw_params_any (capture_handle, hw_params)) < 0) {
fprintf (stderr, "cannot initialize hardware parameter structure (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params initialized\n");
if ((err = snd_pcm_hw_params_set_access (capture_handle, hw_params, SND_PCM_ACCESS_RW_INTERLEAVED)) < 0) {
fprintf (stderr, "cannot set access type (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params access setted\n");
if ((err = snd_pcm_hw_params_set_format (capture_handle, hw_params, format)) < 0) {
fprintf (stderr, "cannot set sample format (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params format setted\n");
if ((err = snd_pcm_hw_params_set_rate_near (capture_handle, hw_params, &rate, 0)) < 0) {
fprintf (stderr, "cannot set sample rate (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params rate setted\n");
if ((err = snd_pcm_hw_params_set_channels (capture_handle, hw_params, 2)) < 0) {
fprintf (stderr, "cannot set channel count (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params channels setted\n");
if ((err = snd_pcm_hw_params (capture_handle, hw_params)) < 0) {
fprintf (stderr, "cannot set parameters (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params setted\n");
snd_pcm_hw_params_free (hw_params);
fprintf(stdout, "hw_params freed\n");
if ((err = snd_pcm_prepare (capture_handle)) < 0) {
fprintf (stderr, "cannot prepare audio interface for use (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "audio interface prepared\n");
buffer = malloc(128 * snd_pcm_format_width(format) / 8 * 2);
fprintf(stdout, "buffer allocated\n");
FILE *fp = fopen("in.pcm", "a+");
//for (i = 0; i < 10; ++i) {
i = 0;
while(++i) {
snd_pcm_wait(capture_handle, 1000);
if ((err = snd_pcm_readi (capture_handle, buffer, buffer_frames)) != buffer_frames) {
fprintf (stderr, "read from audio interface failed (%s)\n",
snd_strerror (err));
exit (1);
}
fwrite(buffer, 1, buffer_frames, fp);
fprintf(stdout, "read %d done\n", i);
}
fclose(fp);
free(buffer);
fprintf(stdout, "buffer freed\n");
snd_pcm_close (capture_handle);
fprintf(stdout, "audio interface closed\n");
exit (0);
}
Can someone tell me what the issue is?
Thanks in advance.
A:
I fixed this issue by changing the buffer size passed to the write function; attaching my working code:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <alsa/asoundlib.h>
main (int argc, char *argv[])
{
int i;
int err;
char *buffer;
int buffer_frames = 128;
unsigned int rate = 44100;
snd_pcm_t *capture_handle;
snd_pcm_hw_params_t *hw_params;
snd_pcm_format_t format = SND_PCM_FORMAT_FLOAT;
// snd_pcm_format_t format = SND_PCM_FORMAT_S16_LE;
if ((err = snd_pcm_open (&capture_handle, argv[1], SND_PCM_STREAM_CAPTURE, 0)) < 0) {
fprintf (stderr, "cannot open audio device %s (%s)\n",
argv[1],
snd_strerror (err));
exit (1);
}
fprintf(stdout, "audio interface opened\n");
if ((err = snd_pcm_hw_params_malloc (&hw_params)) < 0) {
fprintf (stderr, "cannot allocate hardware parameter structure (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params allocated\n");
if ((err = snd_pcm_hw_params_any (capture_handle, hw_params)) < 0) {
fprintf (stderr, "cannot initialize hardware parameter structure (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params initialized\n");
if ((err = snd_pcm_hw_params_set_access (capture_handle, hw_params, SND_PCM_ACCESS_RW_INTERLEAVED)) < 0) {
fprintf (stderr, "cannot set access type (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params access setted\n");
if ((err = snd_pcm_hw_params_set_format (capture_handle, hw_params, format)) < 0) {
fprintf (stderr, "cannot set sample format (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params format setted\n");
if ((err = snd_pcm_hw_params_set_rate_near (capture_handle, hw_params, &rate, 0)) < 0) {
fprintf (stderr, "cannot set sample rate (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params rate setted\n");
if ((err = snd_pcm_hw_params_set_channels (capture_handle, hw_params, 2)) < 0) {
fprintf (stderr, "cannot set channel count (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params channels setted\n");
if ((err = snd_pcm_hw_params (capture_handle, hw_params)) < 0) {
fprintf (stderr, "cannot set parameters (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "hw_params setted\n");
snd_pcm_hw_params_free (hw_params);
fprintf(stdout, "hw_params freed\n");
if ((err = snd_pcm_prepare (capture_handle)) < 0) {
fprintf (stderr, "cannot prepare audio interface for use (%s)\n",
snd_strerror (err));
exit (1);
}
fprintf(stdout, "audio interface prepared\n");
buffer = malloc(128 * snd_pcm_format_width(format) / 8 * 2);
fprintf(stdout, "buffer allocated %d\n", snd_pcm_format_width(format) / 8 * 2);
int fd = open("in.pcm", O_CREAT | O_RDWR, 0666);
//for (i = 0; i < 10; ++i) {
i = 0;
while(++i) {
//snd_pcm_wait(capture_handle, 1000);
if ((err = snd_pcm_readi (capture_handle, buffer, buffer_frames)) != buffer_frames) {
fprintf (stderr, "read from audio interface failed (%s)\n",
snd_strerror (err));
exit (1);
}
write(fd, buffer, 128 * snd_pcm_format_width(format) / 8 * 2);
fprintf(stdout, "read %d done\n", i);
}
close(fd);
free(buffer);
fprintf(stdout, "buffer freed\n");
snd_pcm_close (capture_handle);
fprintf(stdout, "audio interface closed\n");
exit (0);
}
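The key change is the write size: the original fwrite(buffer, 1, buffer_frames, fp) wrote only 128 bytes per period, while one period of 128 frames of 2-channel 32-bit float audio is much larger, so most of each period was dropped and playback was noise. A quick sanity check of the frame arithmetic (an illustrative Python sketch, not part of the post):

```python
def period_bytes(frames, channels, sample_width_bits):
    """Bytes occupied by one period of interleaved PCM audio."""
    return frames * channels * (sample_width_bits // 8)

# SND_PCM_FORMAT_FLOAT is 32 bits wide and the capture uses 2 channels.
print(period_bytes(128, 2, 32))  # 1024 bytes per period -- not 128
```

Note also that the capture is configured for 2 channels while the ffplay command in the question assumes 1 (-ac 1), so the channel count would need to match as well.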
Q:
Outlook macro runs through 250 iterations before failing with error
Description:
I have an Outlook macro that loops through selected emails in a folder and writes some info to a .csv file. It works perfectly up until 250 emails before failing. Here is some of the code:
Open strSaveAsFilename For Append As #1
CountVar = 0
For Each objItem In Application.ActiveExplorer.Selection
DoEvents
If objItem.VotingResponse <> "" Then
CountVar = CountVar + 1
Debug.Print " " & CountVar & ". " & objItem.SenderName
Print #1, objItem.SenderName & "," & objItem.VotingResponse
Else
CountVar = CountVar + 1
Debug.Print " " & CountVar & ". " & "Moving email from: " & Chr(34) & objItem.SenderName & Chr(34) & " to: Special Cases sub-folder"
objItem.Move CurrentFolderVar.Folders("Special Cases")
End If
Next
Close #1
Problem
After this code runs through 250 emails, the following screenshot pops up:
http://i.stack.imgur.com/yt9P8.jpg
I've tried adding a "wait" function to give the server a rest so that I'm not querying it so quickly, but I get the same error at the same point.
A:
Thanks to @76mel for his answer to another question, which I referenced heavily. I found out that there is a built-in limitation in Outlook (source): you can't open more than 250 items, and Outlook keeps them all in memory until the macro ends no matter what. The workaround: instead of looping through each item in the selection:
For Each objItem In Application.ActiveExplorer.Selection
you can loop through the parent folder. I thought I could do something like this:
For Each objItem In oFolder.Items
but, it turns out that when you delete or move an email, it shifts the list up one, so it will skip emails. The best way to iterate through a folder that I found in another answer is to do this:
For i = oFolder.Items.Count To 1 Step -1 'Iterates from the end backwards
Set objItem = oFolder.Items(i)
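The skip is easy to reproduce outside Outlook. A small illustrative sketch (Python, not Outlook code) of why forward index iteration skips items after a removal while backward iteration does not:

```python
def purge_forward(items, predicate):
    """Buggy: removing index i shifts the next item into slot i, then i advances past it."""
    i = 0
    n = len(items)
    while i < n:
        if predicate(items[i]):
            del items[i]
            n -= 1
        i += 1  # the bug: advances even right after a deletion
    return items

def purge_backward(items, predicate):
    """Safe: deletions only shift items that were already visited."""
    for i in range(len(items) - 1, -1, -1):
        if predicate(items[i]):
            del items[i]
    return items

print(purge_forward([1, 1, 2, 1, 1], lambda x: x == 1))   # [1, 2, 1] -- two ones were skipped
print(purge_backward([1, 1, 2, 1, 1], lambda x: x == 1))  # [2]
```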
Here is the whole code, which prompts for a folder to choose to parse, creates sub-directories in that folder for "Out of Office" replies as well as "Special Cases" where it puts all emails that begin with "RE:"
Sub SaveItemsToExcel()
Debug.Print "Begin SaveItemsToExcel"
Dim oNameSpace As Outlook.NameSpace
Set oNameSpace = Application.GetNamespace("MAPI")
Dim oFolder As Outlook.MAPIFolder
Set oFolder = oNameSpace.PickFolder
Dim IsFolderSpecialCase As Boolean
Dim IsFolderOutofOffice As Boolean
IsFolderSpecialCase = False
IsFolderOutofOffice = False
'If they don't check a folder, exit.
If oFolder Is Nothing Then
GoTo ErrorHandlerExit
ElseIf oFolder.DefaultItemType <> olMailItem Then 'Make sure folder is not empty
MsgBox "Folder does not contain mail messages"
GoTo ErrorHandlerExit
End If
'Checks to see if Special Cases Folder and Out of Office folders exists. If not, create them
For i = 1 To oFolder.Folders.Count
If oFolder.Folders.Item(i).name = "Special Cases" Then IsFolderSpecialCase = True
If oFolder.Folders.Item(i).name = "Out of Office" Then IsFolderOutofOffice = True
Next
If Not IsFolderSpecialCase Then oFolder.Folders.Add ("Special Cases")
If Not IsFolderOutofOffice Then oFolder.Folders.Add ("Out of Office")
'Asks user for name and location to save the export
objOutputFile = CreateObject("Excel.application").GetSaveAsFilename(InitialFileName:="TestExport" & Format(Now, "_yyyymmdd"), fileFilter:="Outlook Message (*.csv), *.csv", Title:="Export data to:")
If objOutputFile = False Then Exit Sub
Debug.Print " Will save to: " & objOutputFile & Chr(10)
'Overwrite outputfile, with new headers.
Open objOutputFile For Output As #1
Print #1, "User ID,Last Name,First Name,Company Name,Subject,Vote Response,Recived"
ProcessFolderItems oFolder, objOutputFile
Close #1
Set oFolder = Nothing
Set oNameSpace = Nothing
Set objOutputFile = Nothing
Set objFS = Nothing
MsgBox "All complete! Emails requiring attention are in the " & Chr(34) & "Special Cases" & Chr(34) & " subdirectory."
Debug.Print "End SaveItemsToExcel."
Exit Sub
ErrorHandlerExit:
Debug.Print "Error in code."
End Sub
Sub ProcessFolderItems(oParentFolder, ByRef objOutputFile)
Dim oCount As Integer
Dim oFolder As Outlook.MAPIFolder
Dim MessageVar As String
oCount = oParentFolder.Items.Count
Dim CountVar As Integer
Dim objItem As Outlook.MailItem
CountVar = 0
For i = oParentFolder.Items.Count To 1 Step -1 'Iterates from the end backwards
Set objItem = oParentFolder.Items(i)
DoEvents
If objItem.Class = olMail Then
If objItem.VotingResponse <> "" Then
CountVar = CountVar + 1
Debug.Print " " & CountVar & ". " & GetUsername(objItem.SenderName, objItem.SenderEmailAddress) & "," & objItem.SenderName & "," & GetCompany(objItem.SenderName) & "," & Replace(objItem.Subject, ",", "") & "," & objItem.VotingResponse & "," & objItem.ReceivedTime
Print #1, GetUsername(objItem.SenderName, objItem.SenderEmailAddress) & "," & objItem.SenderName & "," & GetCompany(objItem.SenderName) & "," & Replace(objItem.Subject, ",", "") & "," & objItem.VotingResponse & "," & objItem.ReceivedTime
ElseIf objItem.Subject Like "*Out of Office*" Then
CountVar = CountVar + 1
Debug.Print " " & CountVar & ". " & "Moving email from: " & Chr(34) & objItem.SenderName & Chr(34) & " to the, " & Chr(34) & "Out of Office" & Chr(34) & " sub-folder"
objItem.Move oParentFolder.Folders("Out of Office")
Else
CountVar = CountVar + 1
Debug.Print " " & CountVar & ". " & "Moving email from: " & Chr(34) & objItem.SenderName & Chr(34) & " to the, " & Chr(34) & "Special Cases" & Chr(34) & " sub-folder"
objItem.Move oParentFolder.Folders("Special Cases")
End If
End If
Next i
Set objItem = Nothing
End Sub
Function GetUsername(SenderNameVar As String, SenderEmailVar As String) As String
On Error Resume Next
GetUsername = ""
GetUsername = CreateObject("Outlook.Application").CreateItem(olMailItem).Recipients.Add(SenderNameVar).AddressEntry.GetExchangeUser.Alias
If GetUsername = "" Then GetUsername = Mid(SenderEmailVar, InStrRev(SenderEmailVar, "=", -1) + 1)
End Function
Function GetCompany(SenderNameVar)
On Error Resume Next
GetCompany = ""
GetCompany = CreateObject("Outlook.Application").CreateItem(olMailItem).Recipients.Add(SenderNameVar).AddressEntry.GetExchangeUser.CompanyName
End Function
Q:
How to access the crystalline dimension maps?
I have bought the Dungeon Defenders Collection and have gone to go to the crystalline dimension but it does not appear in my tavern. Help?
A:
To gain access to the Crystalline Dimension, you must first beat each of the Lost Eternia Shards maps:
Mistymire Forest
Moraggo Desert Town
Aquanos
Sky City
This will then spawn the portal in your Tavern. It should be noted that you can't just beat the LES maps on Easy and then jump to Nightmare Hardcore Crystalline Dimension. CD is unlocked at the lowest common difficulty setting that you have completed the four LES maps on, so if you want to play CD on NM HC difficulty, you must complete all of the LES maps on NM HC beforehand.
Q:
Propositional Logic Tautology Proof
I have question about a proposition that I need to prove is a tautology:
$((p \rightarrow q) \wedge (q \rightarrow r)) \rightarrow (p \rightarrow r)$
I have tried negating the first large bracket, but after a few steps I'm stuck. Should I show that the first 2 brackets are the same as $(p \rightarrow r)$ and therefore it is a tautology?
Please help me.
A:
Here is an answer through truth tables,
This is how I did it:
Fill in all the variables first.
Do the first implication from p and q
Do the second implication from q and r
Do the conjunction from the first and second implications
Do the implication to the furthest right, from p and r
Then do the remaining implication from the conjunction and the implication above.
The end result is that the proposition is true in all possible worlds, a tautology.
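The eight-row table those steps describe can also be checked mechanically; a small brute-force verification (Python, purely illustrative):

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# ((p -> q) and (q -> r)) -> (p -> r), evaluated in all 2^3 possible worlds
rows = [
    implies(implies(p, q) and implies(q, r), implies(p, r))
    for p, q, r in product([False, True], repeat=3)
]
print(all(rows))  # True: the proposition holds in every row, i.e. it is a tautology
```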
Q:
In WCF, what's the relationship between service, host, and client?
Can anyone explain the relationship between service, host, and client in the simplest ways?
A:
Let me explain with an analogy. On a very hot day you may want to eat ice-cream and cool off. So you go to an ice-cream parlour and give your order to the lady at the counter. The lady serves you the ice-cream you ordered. Now let's see how this translates to host, service and client.
Service:
Selling ice-cream is the service in this context. The ice-cream parlour may provide other services as well.
In WCF, a "service" is a function which performs a certain activity, and this function can be called remotely across boundaries, e.g. SellIceCream or AddProduct or CalculateTax.
Client:
A client is one who avails of or consumes a service. In our ice-cream example you (the customer) are consuming the service, i.e. buying ice-cream, which is provided by the ice-cream parlour owner.
Host:
The ice-cream parlour owner cannot sell ice-cream in an open space. He needs a covered place where he can arrange various equipment, storage units, a cash counter, etc. It also helps the owner serve customers in a better and more efficient way. In WCF terminology this ice-cream parlour translates into the "Host". The host is where the service lives; the host manages the lifetime of the service.
Q:
append nested lists
I want to append two lists, say [1,2] and [3,4], into a single list:
[[1,2],[3,4]]. How can this be achieved in Prolog? I always get [1,2,3,4].
Thanks
A:
glue(A, B, [A, B]).
There's not much to say about this one!
Usage is
?- glue([1, 2], [3, 4], R).
R = [[1, 2], [3, 4]].
But really, you can hardcode it instead of wrapping it in a predicate.
Q:
minimax with Iterative Deepening search and alpha-beta pruning, stopping time of 5 seconds
I have to make a Kalaha AI that uses alpha-beta pruning with iterative deepening and a time limit of 5 seconds. The plain alpha-beta pruning function works fine and wins all the time, but the version with iterative deepening does not work correctly.
When it is about to finish, the depthCount goes really deep when it shouldn't (like 15000). I added an image at the end to show this problem.
Can someone help me with that?
public int minimaxIDAlphaBeta(GameState currentBoard, int maxPlayer, boolean isMax, boolean isMin, int alpha, int beta) {
int bestMove = 0;
int depthCount = 1;
int value = 0;
Integer maxValue = Integer.MIN_VALUE;
start_time = 0;
time_exceeded = false;
elapsed_time = 0;
if (!currentBoard.gameEnded()) {
start_time = System.currentTimeMillis();
while (!time_exceeded) {
elapsed_time = System.currentTimeMillis() - start_time;
// GameState newBoard = currentBoard.clone();
if (elapsed_time > timeLimit) {
//System.out.println("time out: " + elapsedTime / 1000F + ", and depth count: " + depthCount);
time_exceeded = true;
break;
} else {
// if (newBoard.gameEnded()) {
// return bestMove;
// }
for (int i = 2; i <= 25 ; i++) //for (int i = 1; elapsed_time < timeLimit ; i++)
{
value = MinimaxIterativeDeepeningAlphaBeta(currentBoard, 1, maxPlayer, isMax, isMin, i, alpha, beta, start_time, time_exceeded);
if (value > maxValue) {
bestMove = value;
}
if (elapsed_time >= timeLimit) {
System.out.println("depth count: " + i);
System.out.println("best move: " + bestMove + ", elapsed time: " + elapsed_time / 1000F);
break;
}
}
}
}
}
return bestMove;
}
public int MinimaxIterativeDeepeningAlphaBeta(GameState currentBoard, int currentDepth, int maxPlayer, boolean isMax, boolean isMin, int maxDepth, int alpha, int beta,long start_time,boolean exceeded_time) {
int depth = maxDepth;
int value = 0;
Integer maxValue = Integer.MIN_VALUE;
Integer minValue = Integer.MAX_VALUE;
int bestMove = 0;
//get elapsed time in milliseconds
elapsed_time = System.currentTimeMillis() - start_time;
//if elapsed time is larger than maximum time limit, then stop searching
if (elapsed_time > timeLimit) {
time_exceeded = true;
}
//if time is not exceeded 5 sec
if (!time_exceeded) {
//if the game is ended or we hit a terminal node, return the maxPlayer score
if (currentBoard.gameEnded() || currentDepth == depth || time_exceeded == true) {
if (maxPlayer == 1) {
return currentBoard.getScore(1) - currentBoard.getScore(2);
} else {
return currentBoard.getScore(2) - currentBoard.getScore(1);
}
}
//check to see if it's max turn
if (isMax) {
for (int i = 1; i < 7; i++) {
//check to see if move is possible or not
if (currentBoard.moveIsPossible(i)) {
//copy the current board in each iteration
GameState newBoard = currentBoard.clone();
newBoard.makeMove(i);
//check to see if the next player is max again or not...if it's next turn is max again set isMax true and isMin false...
if (newBoard.getNextPlayer() == maxPlayer) {
isMax = true;
isMin = false;
} else {
isMax = false;
isMin = true;
}
if (isMax) {
//if it's max turn it will excute this recursive function
value = MinimaxIterativeDeepeningAlphaBeta(newBoard, currentDepth + 1, maxPlayer, isMax, isMin, maxDepth, alpha, beta,start_time,exceeded_time);
} else {
//if it's min turn it will excute this recursive function
value = MinimaxIterativeDeepeningAlphaBeta(newBoard, currentDepth + 1, maxPlayer, isMax, isMin, maxDepth, alpha, beta,start_time,exceeded_time);
}
//if the value is greater than the max value, it will store the value in max value and the i as the best move
if (value > maxValue) {
maxValue = value;
bestMove = i;
}
//if maximum value is larger than alpha value, then store maximum value as alpha value
if (maxValue > alpha) {
alpha = maxValue;
}
//if the alpha value is larger than beta value, then stop the iteration
if (beta <= alpha) {
break;
}
}
}
//as long as the depth is greater than 1 we want to calculate the best value and return, but when the current depth is 1 we want to return the best move instead of best value
if (currentDepth != 1) {
bestMove = maxValue;
}
} else { //if it is min turn it will go through the else
for (int i = 1; i < 7; i++) {
if (currentBoard.moveIsPossible(i)) {
//copy the current board in each iteration
GameState newBoard = currentBoard.clone();
newBoard.makeMove(i);
//check to see if the next player is min again or not...if it's next turn is min again set isMin true and isMax false...
if (newBoard.getNextPlayer() != maxPlayer) {
isMax = false;
isMin = true;
} else {
isMax = true;
isMin = false;
}
if (isMin) {
//if it's min turn it will excute this recursive function
value = MinimaxIterativeDeepeningAlphaBeta(newBoard, currentDepth + 1, maxPlayer, isMax, isMin, maxDepth, alpha, beta,start_time,exceeded_time);
} else {
//if it's max turn it will excute this recursive function
value = MinimaxIterativeDeepeningAlphaBeta(newBoard, currentDepth + 1, maxPlayer, isMax, isMin, maxDepth, alpha, beta,start_time,exceeded_time);
}
//if the value is less than the min value, it will store the value in min value and the i as the best move
if (value < minValue) {
minValue = value;
bestMove = i;
}
//if minimum value is smaller than beta value, then store minimum value as beta value
if (minValue < beta) {
beta = minValue;
}
//if the beta value is smaller than alpha value, then stop the iteration
if (beta <= alpha) {
break;
}
}
}
//as long as the depth is greater than 1 we want to calculate the best value and return, but when the current depth is 1 we want to return the best move instead of best value
if (currentDepth != 1) {
bestMove = minValue;
}
}
}
//when the current depth equals to 1 it will return the best move
return bestMove;
}
A:
Problem
Consider the iterative deepening loop in the first function minimaxIDAlphaBeta:
if (currentBoard.getWinner() > -1) {
return bestMove;
}
for (int i = 1; currentBoard.getWinner() == -1; i++) //for (int i = 1; elapsed_time < timeLimit ; i++)
{
value = MinimaxIterativeDeepeningAlphaBeta(currentBoard, 1, maxPlayer, isMax, isMin, i, alpha, beta, start_time, time_exceeded);
// code omitted for clarity
if (elapsed_time >= 4999) {
// code omitted for clarity
break;
}
}
And consider the alpha beta search of the second function:
public int MinimaxIterativeDeepeningAlphaBeta(GameState currentBoard, int currentDepth, int maxPlayer, boolean isMax, boolean isMin, int maxDepth, int alpha, int beta,long start_time,boolean exceeded_time) {
// code omitted for clarity
for (int i = 1; i < 7; i++) {
//check to see if move is possible or not
if (currentBoard.moveIsPossible(i)) {
//copy the current board in each iteration
GameState newBoard = currentBoard.clone();
newBoard.makeMove(i);
// code omitted for clarity
}
}
// code omitted for clarity
}
Each move is made on a clone of the game state, which is correct since a move must be reversed before exiting the alpha-beta function.
However, in the iterative deepening loop only the original game state is asked if the game is over (and the game is never over for the original game state!).
So the loop will always run until the time limit is reached.
Possible Solution
Usually two abort criteria are used for Iterative Deepening:
Maximal depth reached (this is missing) and
Timeout
It would be wrong to terminate the search as soon as you found a terminal state because you might find a better move with a greater search depth limit.
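A minimal shape of such a driver loop with both abort criteria (an illustrative Python sketch, not the asker's Java code):

```python
import time

def iterative_deepening(root, search, max_depth=25, time_limit=5.0):
    """Run depth-limited searches of increasing depth until either
    the depth cap or the time budget is exhausted."""
    deadline = time.monotonic() + time_limit
    best_move = None
    for depth in range(1, max_depth + 1):   # criterion 1: maximal depth reached
        if time.monotonic() >= deadline:    # criterion 2: timeout
            break
        move = search(root, depth, deadline)
        if move is not None:                # keep only fully evaluated results
            best_move = move
    return best_move
```

Here `search` stands for whatever depth-limited alpha-beta function you already have; it should also check `deadline` internally and return None when interrupted mid-search, so a half-finished iteration never overwrites `best_move`.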
Q:
Proving a language is not regular using pumping lemma
I had an exam today and the professor gave us the following problem:
Let $L = \{a^nb^m : n|2m \}$. Prove that $L$ is not regular.
Ok this sounds easy. Here is my solution: Assume opposite -- $L$ is regular. Then by the pumping lemma there exist decomposition $xyz$ of string $s \in L$ such that
$|y| \ge 1$, $|xy| \le p$, where $p$ is the pumping lemma length and $xy^iz \in L$ for all $i \ge 0$.
Setting $ s = а^pb^{2p}$, clearly $s$ is in $L$. Then $s = xyz$, and from the condition $|xy| \le p$ it follows that $y$-part consist only of $a's$.
Here is my problem: I say -- let $y = a$, choose $i = p+1$, then it should $xy^{p+1}z= a^{2p+1}b^p \in L$ -- a contradiction, so $L$ is not regular.
Is it my proof correct?
Many thanks to all,
Ivan
A:
You've made assumptions about the length of $x,y,z$ if you know exactly what $xy^{p+1}z$ is.
You've shown that $x=a^k, y=a^\ell, z=a^{p-k-\ell}b^{2p}$ where $p\geq k+\ell$, $\ell>0$. Then can $xy^2z$ be in $L$?
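Spelling that hint out (a derivation step not in the original posts): with $x=a^k$, $y=a^\ell$, $z=a^{p-k-\ell}b^{2p}$, pumping gives

$$xy^iz = a^{\,p+(i-1)\ell}\,b^{2p}, \qquad xy^iz \in L \iff \bigl(p+(i-1)\ell\bigr) \mid 4p.$$

In particular, for $\ell = p$ we get $xy^2z = a^{2p}b^{2p} \in L$, since $2p \mid 4p$ -- so pumping up need not leave $L$ at all, which is why fixing $y = a$ is an unwarranted assumption.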
Q:
Rails 4: Error when installing tiny_tds gem?
I have developed a Rails web project with Ruby v2 and Rails v4. It was working perfectly on my current system, but when I tried to run the project on another Linux machine (Ubuntu 12.04) I got a tiny_tds error.
This error appears at the time of bundle install.
Error Details below
Gem::Ext::BuildError: ERROR: Failed to build gem native extension.
/home/action/.rvm/rubies/ruby-2.1.1/bin/ruby extconf.rb
checking for iconv_open() in iconv.h... yes
checking for sybfront.h... no
-----
freetds is missing.
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.
Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir
--ruby=/home/action/.rvm/rubies/ruby-2.1.1/bin/ruby
--enable-lookup
--disable-lookup
--with-iconv-dir
--without-iconv-dir
--with-iconv-include
--without-iconv-include=${iconv-dir}/include
--with-iconv-lib
--without-iconv-lib=${iconv-dir}/lib
--with-freetds-dir
--without-freetds-dir
--with-freetds-include
--without-freetds-include=${freetds-dir}/include
--with-freetds-lib
--without-freetds-lib=${freetds-dir}/lib
extconf failed, exit code 1
Gem files will remain installed in /home/action/.rvm/gems/ruby-2.1.1/gems/tiny_tds-0.6.1 for inspection.
Results logged to /home/action/.rvm/gems/ruby-2.1.1/extensions/x86_64-linux/2.1.0/tiny_tds-0.6.1/gem_make.out
An error occurred while installing tiny_tds (0.6.1), and Bundler cannot continue.
Make sure that `gem install tiny_tds -v '0.6.1'` succeeds before bundling.
I have also tried installing tiny_tds separately, but I still get the same issue:
gem install tiny_tds -v '0.6.1'
My gem file,
source 'https://rubygems.org'
# Bundle edge Rails instead: gem 'rails', github: 'rails/rails'
gem 'rails', '4.0.2'
# Use SCSS for stylesheets
gem 'sass-rails', '~> 4.0.0'
gem 'tiny_tds'
# Use Uglifier as compressor for JavaScript assets
gem 'uglifier', '>= 1.3.0'
# Use CoffeeScript for .js.coffee assets and views
gem 'coffee-rails', '~> 4.0.0'
# See https://github.com/sstephenson/execjs#readme for more supported runtimes
# gem 'therubyracer', platforms: :ruby
# Use jquery as the JavaScript library
gem 'jquery-rails'
# Turbolinks makes following links in your web application faster. Read more: https://github.com/rails/turbolinks
gem 'turbolinks'
gem 'jquery-ui-rails' # jquery ui
# Build JSON APIs with ease. Read more: https://github.com/rails/jbuilder
gem 'jbuilder', '~> 1.2'
group :doc do
# bundle exec rake doc:rails generates the API under doc/api.
gem 'sdoc', require: false
end
# Use sqlserver as the database for Active Record
gem 'activerecord-sqlserver-adapter', :git => 'https://github.com/nextgearcapital/activerecord-sqlserver-adapter.git'
gem "therubyracer"
gem "less-rails"
gem "twitter-bootstrap-rails"
gem 'bootstrap-datepicker-rails'
gem 'will_paginate'
gem 'sqlite3'
gem 'formtastic'
Why is this error appearing, and how can I solve it?
Any help is appreciated
A:
Looks like you don't have freetds installed on this machine:
sudo apt-get install freetds-dev
If you see the details of the freetds-dev package you will see it has the missing file sybfront.h
There is no gem that I know of for freetds, and building the tiny_tds gem requires it to compile. However, it should be possible for you to build your own version of the gem using MiniPortile.
Rather than using the normal gem install mechanism, you need to clone tiny_tds from GitHub and then build a native gem for your environment. This process includes downloading a specific version of freetds which the gem is compiled against.
This should get you around the problem of not being able to install the freetds-dev package, but it does have the disadvantage that if the tiny_tds gem is updated in the future you will need to repeat this process each time -- you can't simply take advantage of bundle update.
The steps you need to follow are detailed here.
A:
For MacOS you need additional lib installed.
brew install freetds
A:
This worked for me in Ubuntu 17.04
Install wget to download packages from remote URL
$ sudo apt-get install wget
Install dependencies
$ sudo apt-get install build-essential
$ sudo apt-get install libc6-dev
Download remote file
$ wget ftp://ftp.freetds.org/pub/freetds/stable/freetds-1.00.21.tar.gz
Unzip
$ tar -xzf freetds-1.00.21.tar.gz
Configure and Run Test for the package in your system
$ cd freetds-1.00.21
$ sudo ./configure --prefix=/usr/local --with-tdsver=7.3
$ sudo make
Now, Install
$ sudo make install
Now install the gem
$ gem install tiny_tds -v '2.0.0'
If still, you cannot install the gem tiny_tds then try using sudo
$ sudo gem install tiny_tds -v '2.0.0'
Output
Building native extensions. This could take a while...
Successfully installed tiny_tds-2.0.0
Parsing documentation for tiny_tds-2.0.0
Installing ri documentation for tiny_tds-2.0.0
Done installing documentation for tiny_tds after 1 seconds
1 gem installed
Q:
How can I get all the text of a tweet that's been retweeted and exceeds 140 chars?
I have started learning Twitter4j API and have got all credentials and tokens from Twitter to use it. I am using twitter4j API version 2.2.5.
I am able to get my own timeline using a simple java program and print it on console. I am able to get all the tweets and retweets done by me using the code below.
List<Status> statuses;
statuses = twitter.getUserTimeline();
for (Status status1 : statuses){
System.out.println(status1.getText());
}
The problem is that I retweeted a tweet which already consists of 140 characters, so after the retweet it becomes more than 140 characters. The whole tweet is not printed to the console; instead it prints ... at the end.
RT @xxxxxx: The ************************************ , but the pai.....
How I can get the whole tweet?
A:
Based on Twitter4J's Status Javadoc, I would try this:
status1.getRetweetedStatus().getText()
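Building on that, here is a hedged sketch of how the question's loop could print the full retweet text (isRetweet() and getRetweetedStatus() are part of the Twitter4J Status interface; the "RT @user:" prefix and the helper itself are my assumptions about how to format the output):

```java
// Pure helper that rebuilds the untruncated display text of a retweet.
public class RetweetText {

    static String fullText(String retweetedScreenName, String retweetedText) {
        return "RT @" + retweetedScreenName + ": " + retweetedText;
    }

    // In the Twitter4J loop this would be used roughly like:
    //
    //   for (Status status1 : statuses) {
    //       if (status1.isRetweet()) {
    //           Status original = status1.getRetweetedStatus();
    //           System.out.println(fullText(original.getUser().getScreenName(),
    //                                       original.getText()));
    //       } else {
    //           System.out.println(status1.getText());
    //       }
    //   }

    public static void main(String[] args) {
        System.out.println(fullText("xxxxxx", "the full, untruncated tweet text"));
    }
}
```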
Q:
Pre-biased Bipolar Transistor (BJT) Tradeoffs
If you go to the Discrete Semiconductor category on Mouser.com, there is a section for "Bipolar Transistors - Pre-Biased".
Is this as simple as including a resistor in series with the base?
Are there other nuances about this type of package?
A:
Pre-biased (also "resistor-equipped" or "digital") transistors indeed have a plain resistor in front of the base. Most also have a second resistor between base and emitter. Both resistors are used in exactly the same way and for the same purposes as external, discrete resistors.
Creating resistors on a silicon die is harder, so they have looser tolerances, often ±30 %.
(For digital switching, where all you care about is that the transistor is saturated, this does not matter.)
Otherwise, there is no difference.
A:
As CL. said they have one or two built-in resistors.
In addition to saving board space and money, they are also somewhat idiot-proof if you use them within reason.
Often the base resistor is dimensioned so that even if you turn the transistor fully on, the collector won't conduct more current than the part can handle. With typical digital circuits that stay below 5 V, these parts will also not dissipate more power than they can handle, even if you accidentally switch Vcc directly to ground without any load.
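As a rough worked example of that dimensioning (every number below is an assumption for illustration, not a value from any particular part's datasheet):

```python
# Hedged sketch: worst-case currents for a pre-biased ("digital") transistor
# with an assumed built-in 10 kΩ base resistor, driven from 5 V logic.
V_IN = 5.0     # drive voltage at the input pin (V)
V_BE = 0.7     # base-emitter forward drop (V)
R_BASE = 10e3  # built-in series base resistor (ohms)
BETA = 100     # assumed current gain

i_base = (V_IN - V_BE) / R_BASE       # base current the resistor allows
i_collector_max = BETA * i_base       # upper bound on collector current

print(f"I_B = {i_base * 1e3:.2f} mA, I_C(max) = {i_collector_max * 1e3:.1f} mA")
```

With these assumed numbers the collector current is bounded at roughly 43 mA even with the input held hard at 5 V, which is the idiot-proofing described above.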
Q:
Diode in positive feedback?
I have a colleague who is off work for the next 2 weeks and has asked me to finish off one of his schematic designs. I have a list of operations it needs to be able to perform, seems simple enough.
I got round to making a start on it today, and after having a browse through what he has already done, I notice something that I haven't seen before.
Here is the basics of what it looks like:
simulate this circuit – Schematic created using CircuitLab
I have not seen an op amp circuit which uses a diode in the feedback circuit in this way. I recognise it is a window comparator, and this part of the circuit is used to detect a voltage level and turn on an LED if it goes above or below a threshold. I just can't work out what the point of the resistor and diode is in the feedback.
My go-to op amp configuration PDF is one from texas instruments (LINK) and I couldn't find one like this. So can anyone tell me what the function is of this feedback circuit?
NOTE: I have labelled things as V1, V2, OUT etc as they should be irrelevant to the circuit, V1 and V2 are measuring the input voltage, Vref is the threshold, and the output toggles LEDs
EDIT: I have updated the schematic to include the resistors that Andy aka mentioned, the values of resistance are what was on the schematic at the time, they may be incorrect, I am unsure, as the schematic is not finished.
A:
It looks like the intention is to provide hysteresis. For instance (and assuming that V1 and Vref have series resistance that is not shown on the OP's diagram), if V1 drops below Vref then OUT will drop to 0 volts and R1/D2 will further enhance the effect of V1 lowering below Vref. This is called hysteresis and is used to avoid a situation where V1 is hovering close to the value of Vref and causing OUT to oscillate high and low due to noise.
That is what hysteresis does - once a comparator switches it stays switched with no ambiguity.
With a diode (D2) in series with R1 and with OUT high there is NO current passing back through D2 to V1. This means that if Vref were to increase toward V1, OUT would switch low at precisely the point Vref = V1.
It's a kind of one-sided hysteresis and this is due to the diode blocking the hysteresis effect in one direction.
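To put rough numbers on that (every value below is an assumption — the series resistances aren't shown on the schematic):

```python
# Hedged sketch of the one-sided hysteresis: when OUT sits at 0 V, D2
# conducts and R1 pulls the sensed input down through the assumed source
# resistance RS; when OUT is high, D2 blocks and there is no shift.
RS = 10e3         # assumed series resistance in the V1 path (ohms)
R1 = 100e3        # feedback resistor (ohms)
V1 = 2.5          # input voltage (V)
V_OUT_LOW = 0.0   # comparator output when switched low (V)
V_D = 0.7         # diode forward drop (V)

# Superposition at the sensed node with OUT low and D2 conducting:
v_sense_out_low = (V1 * R1 + (V_OUT_LOW + V_D) * RS) / (R1 + RS)
# With OUT high, D2 blocks, so the sensed voltage is just V1:
v_sense_out_high = V1

print(f"sensed input: {v_sense_out_low:.3f} V (OUT low) vs {v_sense_out_high:.3f} V (OUT high)")
```

So with these assumed values the effective input is pulled about 0.16 V below V1 once the comparator has switched low, which is exactly the single-sided stickiness described above.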
Q:
Can you launch jQuery from SVG?
I would like to post on
http://anysite.com (any forum/blog/site/...)
an SVG image tag remotely, like this:
< img src="http://myownsite.com/myimage.svg" > (my own site)
in order to launch jQuery/javascript/ajax from it. In detail, I would like the .svg file to open a bigger div including interactive features.
Is this possible? If yes, how?
A:
Browsers don't permit SVG loaded via <img> tags to run javascript for security reasons. The security issues they are concerned with are pretty much what you're trying to do, have an image which appears to be one thing but actually isn't.
Q:
ESLint React JSX Closing Tag / Parsing Error
I'm trying to set up ESLint on my React project.
Installed both eslint and eslint-plugin-react locally on my project.
Also using the VSCode ESLint extension (but I tried this without this extension and I also get the same linting errors).
Below you can see my .eslintrc.json file and dependency versions. I've got JSX enabled.
I keep getting errors on these closing tags.
What seems to be the problem? I can't make these errors go away, and they are just simple closing tags.
Thanks a lot.
A:
It seems that it was caused by some kind of conflict between the eslint package and some of the babel packages, because when I did a clean install using only eslint and react, that .eslintrc.json configuration worked perfectly.
I didn't have babel-eslint installed before and I was using the default parser that comes with eslint.
To solve the problem I had to install the babel-eslint package and use it as the parser for the eslint package. Everything is working fine now and those errors are gone.
Even though the babel-eslint docs say that you don't have to use it just because you're using babel, in my situation, using it along with babel and eslint solved the problem.
New eslint config file:
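The file itself wasn't reproduced above, but a minimal .eslintrc.json along these lines would match the fix described (only the babel-eslint parser setting comes from the answer; the parserOptions, plugins, extends, and env entries are my assumptions):

```json
{
  "parser": "babel-eslint",
  "parserOptions": {
    "ecmaVersion": 2018,
    "sourceType": "module",
    "ecmaFeatures": { "jsx": true }
  },
  "plugins": ["react"],
  "extends": ["eslint:recommended", "plugin:react/recommended"],
  "env": { "browser": true, "es6": true }
}
```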
Q:
F# implementing interfaces with different template parameters
Having this code:
type Point2D(x, y) =
    member this.X with get() = x
    member this.Y with get() = y

    interface System.IEquatable<Point2D> with
        member x.Equals point =
            x.X = point.X
            && x.Y = point.Y

type Point3D(x, y, z) =
    inherit Point2D(x, y)
    member this.Z with get() = z

    interface System.IEquatable<Point3D> with
        member x.Equals point =
            (x :> System.IEquatable<Point2D>).Equals point
            && x.Z = point.Z
http://take.ms/rjWlJ
I get a compile-time error. F# does not allow implementing the same interface with different type parameters. But I want to implement a strongly-typed Equals in the derived type. What should I do?
A:
No-can-do. Which is a slightly sorry state of affairs, since F# can consume a type with such a design perfectly well when it comes from a different CLR language.
The explanation I heard was that having such a feature was not deemed important, partially because having such a class/interface-heavy design was not considered idiomatic (which I agree with).
So one option you have is implementing those types in C#.
Another more idiomatic way would be to implement them as records:
type Point2D = { x: float; y: float }

type Point3D =
    { x: float; y: float; z: float }
    with
        static member FromPoint2D (p: Point2D) = ...
        member this.ToPoint2D () = ...
Perhaps add a module with a function for comparing 2d and 3d that will encapsulate the conversion for you.
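To make the record sketch concrete, here is one hedged way to fill in the bodies (the extra z argument and the member implementations are my assumptions, not the original answerer's code):

```fsharp
type Point2D = { x: float; y: float }

type Point3D =
    { x: float; y: float; z: float }

    static member FromPoint2D (p: Point2D, z: float) =
        { x = p.x; y = p.y; z = z }

    member this.ToPoint2D () : Point2D =
        { x = this.x; y = this.y }
```

Records also get structural equality for free, so p1 = p2 already compares all fields — no IEquatable plumbing needed, which addresses the original goal of a strongly-typed equals.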
Q:
OneApp Profile having View All access to Accounts
I am attempting to allow a group of Force.com OneApp users to be able to Modify All on a custom object that is the child in a Master-Detail relationship with Account. The security access on our Account object is rather complex, and I'm not particularly excited about making my custom object Lookup to Account and try to figure out how to keep in sync everyone with Account access being able to see the custom object. That being said, it appears I can click the checkbox for Modify All access on the child of a master-detail object in the Profile page, but I get an error telling me that the OneApp user needs to also have Read All access to Accounts (which is not a checkbox that is available -- only Read). I am wondering if there is any way around this?
EDIT: To clarify, the goal here is to have a small group of people who are only concerned with the custom object to be able to edit all of them. They don't have any interest in the parent object (a different batch of people leverages most of the other objects), but there is no issue with giving them Read access to all the accounts.
A:
If you must have independent access to the child I'm guessing the relationship has to be a lookup rather than master detail.
Q:
How can an entity inherit a property in a recursive model structure?
I have a structure identical to this post here:
How do I create a recursive one-to-many relationship using core data?
It looks something like this:
cage ---------- animal 1
|
|_____ animal 2
|
|_____ animal 3 ____ animal 4
|
|__ animal 5
|
|_____ animal 6
And I have implemented my models exactly as the correct answer has done, i.e.
The problem for me is that, with this structure, only animal 1 has a non-nil cage property, but I would like ALL descendant animals to have the cage property so that I could query animal6.cage.
I've tried setting this manually but the inverse relationship causes any animal with a cage property to be a direct child of that cage, which I don't want.
Is it possible to inherit the cage property for each animal?
A:
You're using terms like "inherit" and "child" in ways that don't have meaning to Core Data. You have Cage which is related to Animal. Animal has a relationship to itself.
There's no parent/child relationship or inheritance here as far as Core Data is concerned. If one Animal is related to another, they're just two instances with a relationship. One can't inherit a value from the other because one is not the "parent" in any sense that Core Data uses. The two instances are two independent objects and they don't inherit anything any more than any two non-Core Data objects would.
Following from that, setting the cage property doesn't make an Animal a "direct child" of the cage, it just says it's related to the cage. If you want to find the cage for any arbitrary Animal without setting cage on every instance, you need to do something like (Swift-ish pseudocode):
func cage(for animal: Animal) -> Cage? {
    var currentAnimal = animal
    var cage = currentAnimal.cage
    while cage == nil && currentAnimal.parent != nil {
        currentAnimal = currentAnimal.parent
        cage = currentAnimal.cage
    }
    return cage
}
That's fine if you just want to find the cage for an animal, but you can't use it in a fetch request. If you need something you can use when fetching, you probably need to add a second relationship from Animal to Cage so that you can distinguish the "parent" animal from any others. Every Animal would have a value for one of the relationships, and the other relationship would be reserved for the parent.
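With that second relationship in place, a fetch sketch could look like this (the relationship names cage and directCage are hypothetical — cage would be set on every animal, while directCage is reserved for the top-level one):

```swift
import CoreData

// Hypothetical model: every Animal sets "cage"; only the top-level animal
// in each cage also sets "directCage".
func animals(in cage: Cage, context: NSManagedObjectContext) throws -> [Animal] {
    let request = NSFetchRequest<Animal>(entityName: "Animal")
    request.predicate = NSPredicate(format: "cage == %@", cage)
    return try context.fetch(request)
}
```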