Topic: (not a) switch case question (anymore...) (Read 4245 times)
dhenry
It is fairly simple. For example, this would be my implementation, high-level:
void led2_display(void) {
  static unsigned char current_dig = 0; // current digit to be displayed. 0 = tens digit, 1 = single digit
  // turn off all digits, optional
  // DIG_OFF(DIG_0); DIG_OFF(DIG_1);
  switch (current_dig) {
    case DIG_0: DIG_OFF(DIG_1); SEG_OUT(LRAM[DIG_0]); DIG_ON(DIG_0); current_dig = DIG_1; break; // turn off the previous digit, send data to the segments, turn on the current digit, and advance to the next digit
    case DIG_1: DIG_OFF(DIG_0); SEG_OUT(LRAM[DIG_1]); DIG_ON(DIG_1); current_dig = DIG_0; break; // same, for the other digit
    default: break; // do nothing for invalid input
  }
}
Each time led2_display() is called, it shows the next digit (from display buffer LRAM[]). If you call it fast enough, it appears to show both.
You just need to figure out how to implement DIG_ON/OFF() and SEG_OUT() based on your hardware.
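To make the round-robin idea concrete, here is a rough Python simulation of it (the pin-level DIG_ON/OFF() and SEG_OUT() calls are replaced by simply returning what would be shown, and the LRAM segment data is made up for illustration):

```python
# Toy simulation of two-digit multiplexing: each call "lights" one digit.
DIG_0, DIG_1 = 0, 1
LRAM = [7, 4]        # display buffer: segment data for the tens and ones digit
current_dig = DIG_0  # digit to show on the next call

def led2_display():
    """Show the next digit; alternates DIG_0 / DIG_1 on successive calls."""
    global current_dig
    shown = (current_dig, LRAM[current_dig])  # (which digit, what segment data)
    current_dig = DIG_1 if current_dig == DIG_0 else DIG_0
    return shown

# Called fast enough, the digits alternate and appear to be lit together.
frames = [led2_display() for _ in range(4)]
```

Each call shows the next digit, exactly as described above; on real hardware the only difference is that the return value becomes port writes.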
Neight
Soooo, here we go, totally revised v4.0
I finally threw in the towel on making a dual 7seg display, even with some code I simply copied and pasted, it wasn't quite working right. (the code I copied was for a 4 digit display, while I only had two digits, had to mod it a little, and lost something in the translation...) it would not scroll the message properly, so I got pretty frustrated after a whole weekend spent on it, and went to radio shack and picked up a simple LCD display...
I really like this option for my project, and I think it works better anyway. Now I can always display on the screen which dice mode you are in, and I am now starting to think of ways to add more dice to the mix. that way you can roll multiple dice at the same time. probably going to use a toggle switch and some basic digitalRead arguments to tell the arduino to display 2 or maybe even 3 random results of the current dice mode. Really increase the functionality of this thing.
So far, everything is working nearly perfect. Only glitch is, when I have cycled through all the dice modes, my default D2 mode (coin flip) is called D1 instead of D2. When it first comes on, it shows D2, then D4, D6, D8, D10, D12 and D20 as you cycle through the modes. One more button push should cycle back to D2, but instead reads D1.
here is my code...
#include <LiquidCrystal.h>
LiquidCrystal lcd(2,3,4,5,6,7);
const int mButton = 9;
int mode = 0;
int oldMode = 0;
const int Tilt = 8;
int val = 0;
int oldVal = 0;
int state = 0;
int dValue[] =
{2,4,6,8,10,12,20};
long result;
void setup()
{
Serial.begin(9600);
pinMode(Tilt, INPUT);
pinMode(mButton, INPUT);
lcd.begin(16,2);
lcd.print("DigiDice - D");
lcd.print(dValue[state]);
randomSeed(analogRead(0));
}
void loop()
{
{
mode = digitalRead(mButton);
if ((mode == HIGH) && (oldMode == LOW))
{
state++;
{
lcd.begin(16,2);
lcd.print("DigiDice - D");
lcd.print(dValue[state]);
}
}
if (state > 6)
{
state = 0;
delay(10);
}
oldMode = mode;
}
val = digitalRead(Tilt);
if ((val == HIGH) && (oldVal == LOW))
{
result = random(1, dValue[state] + 1);
lcd.setCursor(0, 1);
lcd.print("Result = ");
lcd.print(result);
lcd.print(" ");
delay(3000);
} else
{
lcd.setCursor(0,1);
lcd.print("Give Me A Shake");
}
Serial.println(result);
}
sorry for the lack of comments, I honestly just threw this together on the fly, and it worked nearly out of the box. I haven't taken the time to comment it yet, but will before I am all done.
pretty basic code, and the LiquidCrystal library is great! Made this a snap.
if anyone could let me know why my dice mode is displaying wrong (I still get the correct expected results, no false numbers outside the dice range) and maybe point me to a fix, I would greatly appreciate it!
This has been a very twisty, educational project for me, and I cannot express how much I have appreciated everyone who took the time to try and walk me through this.
absence of proof is not proof of absence
djjoshuad
You do have some unnecessary curly braces in there (they aren't hurting anything) and your style differs a bit from mine :) but I don't see any obvious reason this should ever print "D1". Quick question - if you press that button again, does it go to D2 or D4 (or something else)?
Neight
I am having trouble pinning it down also.
if you push the button again, it goes to D4 like it is supposed to.
The only mode that doesn't display correctly, no matter how many times you cycle through, is D2.
I have tried adding and subtracting 1 at various points in the code where the display is called, but that only seems to affect my dice results, or limits my results by one.
I am honestly stumped, but for the moment it is a minor issue, and I have moved on.
When I get the rest of the functions I am trying to work out done and working correctly, I will probably go back and see if I can fix it.
If it can't be fixed, then so be it, I could have done worse in my opinion :P
Out of curiosity, do you mind explaining how you would have handled it?
I am always interested in learning other ways to do things with the Arduino. The more ways you can do one thing, the more tools you have in the toolbox :)
I seem to be nutty for curly brackets, I always end up with many of them :P
PeterH
This bit of code does things in the wrong order:
state++;
{
lcd.begin(16,2);
lcd.print("DigiDice - D");
lcd.print(dValue[state]);
}
}
if (state > 6)
You print the new dice value before you do the 'if(state > 6)' check to reset it to zero. The '1' you're seeing on the display is just whatever happens to be in the memory location after the last dice.
A better way to do this would be like this:
// DICE_COUNT is the number of elements in the dValue array
state = (state+1) % DICE_COUNT;
lcd.begin(16,2);
lcd.print("DigiDice - D");
lcd.print(dValue[state]);
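The difference between the two orderings can be checked with a plain Python list (this is only an illustration of the indexing, not the sketch itself):

```python
dValue = [2, 4, 6, 8, 10, 12, 20]
DICE_COUNT = len(dValue)  # 7 entries

def next_mode(state):
    # Wraps 6 -> 0 *before* anything reads dValue[state],
    # so the index is always valid when it is printed.
    return (state + 1) % DICE_COUNT

state = 6                 # currently showing D20, the last entry
state = next_mode(state)  # wraps back to 0, i.e. D2

# The buggy order (increment, print, then reset) would read dValue[7] first;
# C silently reads whatever memory follows the array there, while Python
# would raise an IndexError.
```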
I only provide help via the forum - please do not contact me for private consultancy.
Neight
#35
Jan 14, 2013, 01:34 pm Last Edit: Jan 14, 2013, 01:38 pm by Neight Reason: 1
This bit of code does things in the wrong order:
state++;
{
lcd.begin(16,2);
lcd.print("DigiDice - D");
lcd.print(dValue[state]);
}
}
if (state > 6)
You print the new dice value before you do the 'if(state > 6)' check to reset it to zero. The '1' you're seeing on the display is just whatever happens to be in the memory location after the last dice.
A better way to do this would be like this:
// DICE_COUNT is the number of elements in the dValue array
state = (state+1) % DICE_COUNT;
lcd.begin(16,2);
lcd.print("DigiDice - D");
lcd.print(dValue[state]);
you are 100% correct sir!
while you posted that, I had accidentally stumbled on the answer myself, but hadn't realized it yet, or why.
I am very nearly successful at my next stage of complexity with this project.
here is what I am doing with it now....
There is a push button to select through the range of dice modes.
there is a toggle switch to select rolling one dice or two dice at a time. (both dice must be of the same value though, ex - two d6.)
I can get the LCD to display the correct dice mode all through the count now, and I can get two different results with one shake of the dice.
my only problem now is the LCD seems to be trying to display both modes at the same time, and it makes the words flicker like mad, and is a bit hard to read everything.
I am using a switch case at the moment to handle switching between one and two dice per throw. I originally wrote it as an if/else, but changed it to a switch to see if that would cure my screen flicker. It seems to have helped a bit, but it is still very much an issue.
I am working my way through the LCD examples now trying to find out how to change the number of dice, and print two completely different screen readouts, without the flicker
if that makes any sense...
here is the code as I have it written now...
#include <LiquidCrystal.h>
LiquidCrystal lcd(2,3,4,5,6,7);
const int nButton = 11;
const int mButton = 9;
int mode = 0;
int oldMode = 0;
const int Tilt = 8;
int val = 0;
int oldVal = 0;
int state = 0;
int number = 0;
int dValue[] =
{2,4,6,8,10,12,20};
long result;
long resultA;
long resultB;
void setup()
{
pinMode(Tilt, INPUT);
pinMode(mButton, INPUT);
pinMode(nButton, INPUT);
randomSeed(analogRead(0));
}
void loop()
{
mode = digitalRead(mButton);
if ((mode == HIGH) && (oldMode == LOW))
{
state++;
}
if (state > 6)
{
state = 0;
delay(10);
}
oldMode = mode;
{
number = digitalRead(nButton);
switch(number)
{
case 0:
{
{
lcd.begin(16,2);
lcd.print("DigiDice - D");
lcd.print(dValue[state]);
}
val = digitalRead(Tilt);
if ((val == HIGH) && (oldVal == LOW))
{
result = random(1, dValue[state] + 1);
lcd.setCursor(0, 1);
lcd.print("Result = ");
lcd.print(result);
lcd.print(" ");
delay(3000);
} else
{
lcd.setCursor(0, 1);
lcd.print("Give Me A Shake!");
}
}
break;
case 1:
{
{
lcd.begin(16,2);
lcd.print("Roll Two - D");
lcd.print(dValue[state]);
}
val = digitalRead(Tilt);
if ((val == HIGH) && (oldVal == LOW))
{
resultA = random(1, dValue[state] + 1);
resultB = random(1, dValue[state] + 1);
lcd.setCursor(0,1);
lcd.print("R1 = ");
lcd.print(resultA);
lcd.print(" R2 = ");
lcd.print(resultB);
lcd.print(" ");
delay(3000);
} else
{
lcd.setCursor(0, 1);
lcd.print("C'mon...Shake Me!!");
}
}
break;
}
}
}
I seem to be off on my curly brackets somewhere, because my indenting is messed up a bit, but I will fix that when I have the whole code in working order.
off to keep researching, but will be happy to hear any suggestions to stabilize my display :)
ETA: I should mention that when you shake the "dice", the display does stabilize, and very clearly prints everything it is supposed to.
This makes me think I should have the mode selecting code in both cases, like the Tilt code, and maybe that will work it out.
seems very redundant to me, but this is how we learn ;)
Again, very happy to take suggestions, improvements, and general criticism on the project!
be brutal if need be :)
HazardsMind
I don't think your lcd.print(" "); is long enough to clear the entire line in case 1, try to make it longer and see if that works.
My GitHub:
https://github.com/AndrewMascolo?tab=repositories
Neight
actually, I might not even need the " " anymore.
that was there to clear the rest of the line earlier on.
I did fix the flicker problem after all.
at some point I must have deleted the line of code that initializes the screen size and positions
(lcd.begin(16,2))
without that, it was writing the data over and over on the whole screen. The only reason text was visible at all is because of the repetitiveness of the message...
Now that I have put the initialization code back in the setup, it works like a charm, and is quite beautiful if you ask me :P
May not be the most efficient way to do it ( I am assuming it isn't anyway) but it works perfectly, and I quite enjoy it.
Now to add some effects to make it a bit more fun to watch.
trying to figure out some simple animation to represent the dice rolling for a moment before displaying the results.
Even had an idea to mount this in a project box, and make a small box inside the project box that would hold an actual die. That way, when you shake the dice roller, there is a real dice-rolling sound to go along with the shaking :) (kind of like the sound when you shake dice in a cup while playing Yahtzee). Matter of fact, this kind of makes me want to make a whole new code specifically to handle dice rolling for Yahtzee, building the new one around a full set of Yahtzee dice, and being able to roll them all at once, just like in the real game.
which makes me wonder how hard it would be to write code to play actual Yahtzee on the Arduino...
good lord, I am getting ahead of myself here (to be honest, I have never played Yahtzee, so I don't know all the rules, but I know the game is structured around dice, so it has my brain spinning)
PeterH
Rather than adding physical dice, you might continue the Arduino theme and see whether you can make your Arduino output the sound of dice rolling in a cup.
You might also add an accelerometer/tilt switch so that it makes the 'rolling' sound when you shake it, and rolls the dice when you put it down.
Neight
Rather than adding physical dice, you might continue the Arduino theme and see whether you can make your Arduino output the sound of dice rolling in a cup.
You might also add an accelerometer/tilt switch so that it makes the 'rolling' sound when you shake it, and rolls the dice when you put it down.
sounds like another shopping trip to me :P
First thoughts on that would be a music shield, or build my own, with an mp3 on an SD card of a dice rolling sound.
I have been wanting to pick up an accelerometer and play with it, and this is as good an excuse as any!
Great ideas BTW, and thanks for the suggestions. It would keep the project that much more compact not having to build a chamber large enough for a die to roll around in.
I also like the shake then put down to roll idea quite a bit. Anything to increase the sensation of rolling dice would be a good thing in my opinion.
thinking about an enclosure for this one, and making something permanent out of it. It's been a struggle, and it would make a fun souvenir when I get it all done. Could even have two larger modes, one for d2 - d20 for strategy gaming, and one that is just a set of multiple six-sided dice for Yahtzee-style play, so you can roll both ways if you choose.
Lots of expansion could be done on this project, and this could really be a neat addition to any game where dice are involved :)
Heck, I might even try for a third option, and come up with my own dice based game that could be played with two (or more?) players, and could be played all on the arduino with no external game mechanics involved.
Wonder how long it would take me to find the limits of the arduino with a dice rolling project?...
Neight
guess I am not doing as well as I hoped...
Tried rearranging the code to take out some elements I thought were redundant (like having the mode select code in both cases.)
now, the dice values displayed are not right again.
The roll results are still good, and the rate they cycle through the dice modes when you push the button is still correct also.
now, when you turn it on, it shows you are in D2 mode, which is correct, from there it goes through D4, D6, D8, D10, D12, then D20 which is also right.
Here is where it gets odd.
it hangs on 20 for two pushes, then goes to 40, 60, 80, 10, 12, 20, 20, 40, 60.... on like that.
after the first time though, it starts displaying the modes in tens, instead of single and double digits.
on the second pass, the dice modes are still right, but the name for them is wrong.
once again, I cannot see where the code is making the error, so without further ado...
#include <LiquidCrystal.h>
LiquidCrystal lcd(2,3,4,5,6,7);
const int nButton = 11;
const int mButton = 9;
int mode = 0;
int oldMode = 0;
const int Tilt = 8;
int val = 0;
int oldVal = 0;
int state = 0;
int number = 0;
int dValue[] =
{2,4,6,8,10,12,20};
long result;
long resultA;
long resultB;
void setup()
{
pinMode(Tilt, INPUT);
pinMode(mButton, INPUT);
pinMode(nButton, INPUT);
lcd.begin(16,2);
randomSeed(analogRead(0));
}
void loop()
{
{
mode = digitalRead(mButton);
if ((mode == HIGH) && (oldMode == LOW))
{
state++;
}
if (state >= 7)
{
state = 0;
delay(10);
} oldMode = mode;
}
{
number = digitalRead(nButton);
switch(number)
{
case 0:
{
{
lcd.setCursor(0,0);
lcd.print("DigiDice - D");
lcd.print(dValue[state]);
}
val = digitalRead(Tilt);
if ((val == HIGH) && (oldVal == LOW))
{
result = random(1, (dValue[state] + 1));
lcd.setCursor(0, 1);
lcd.print("Result = ");
lcd.print(result);
lcd.print(" ");
delay(3000);
} else
{
lcd.setCursor(0, 1);
lcd.print("Give Me A Shake!");
}
}
break;
case 1:
{
{
lcd.setCursor(0,0);
lcd.print("Roll Two - D");
lcd.print(dValue[state]);
}
val = digitalRead(Tilt);
if ((val == HIGH) && (oldVal == LOW))
{
resultA = random(1, (dValue[state] + 1));
resultB = random(1, (dValue[state] + 1));
lcd.setCursor(0,1);
lcd.print("R1 = ");
lcd.print(resultA);
lcd.print(" R2 = ");
lcd.print(resultB);
lcd.print(" ");
delay(3000);
} else
{
lcd.setCursor(0, 1);
lcd.print("C'mon...Shake Me!!");
}
}
break;
}
}
}
This is where I have it at now.
The mode select part of the code seems to be working right, and I have written it several different ways to test it.
The code to show the dice value isn't something I changed, and it went from working fine to not. Don't know what happened.
any help would be awesome.
it is possible I just need fresh eyes on it, and the problem is fairly obvious, but for right now, I am stumped.
PeterH
thinking about an enclosure for this one, and making something permanent out of it. It's been a struggle, and it would make a fun souvenir when I get it all done.
Have you considered using a cup of some sort as the enclosure? Perhaps with the display in the base so that you shake it, put it down upside down, and see the rolled values displayed in the base? Just make sure nobody tries to drink out of it. :)
Neight
thinking about an enclosure for this one, and making something permanent out of it. It's been a struggle, and it would make a fun souvenir when I get it all done.
Have you considered using a cup of some sort as the enclosure? Perhaps with the display in the base so that you shake it, put it down upside down, and see the rolled values displayed in the base? Just make sure nobody tries to drink out of it. :)
Man, you are full of great ideas!
I have been looking into 3D printers lately, and could even design and print something that could fit the hardware and battery in nice and secure.
very cool, I really did hope this project would turn into something fun and long term, that I could actually use. I really liked the idea, and its only getting better as I move on!
djjoshuad
it hangs on 20 for two pushes, then goes to 40, 60, 80, 10, 12, 20, 20, 40, 60.... on like that.
after the first time though, it starts displaying the modes in tens, instead of single and double digits.
This one I know :)
What's happening is you're overwriting the two digit number with a one digit number... with nothing to write in the second digit, the LCD continues to display what was already there. Try using sprintf to format your number. That should get rid of the unwanted 0
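The stale-digit effect, and the fixed-width fix, can be mimicked in Python (string formatting standing in for C's sprintf; the overwrite helper below is illustrative, not part of the LiquidCrystal API):

```python
def overwrite(lcd_line, text, col=0):
    """Mimic an LCD write: replace characters starting at col, keep the rest."""
    return lcd_line[:col] + text + lcd_line[col + len(text):]

line = overwrite(" " * 16, "DigiDice - D20")   # two-digit mode on screen
line = overwrite(line, "DigiDice - D4")        # shorter write: the old '0' survives
stale = line.strip()                           # reads "DigiDice - D40"

# Formatting the number into a fixed-width field pads with a space,
# so the leftover digit gets overwritten.
padded = overwrite(" " * 16, "DigiDice - D%-2d" % 4).strip()
```

The same idea works on the real display: either pad the number to a fixed width, or print trailing spaces after it, so every character position from the previous frame is rewritten.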
Neight
it hangs on 20 for two pushes, then goes to 40, 60, 80, 10, 12, 20, 20, 40, 60.... on like that.
after the first time though, it starts displaying the modes in tens, instead of single and double digits.
This one I know :)
What's happening is you're overwriting the two digit number with a one digit number... with nothing to write in the second digit, the LCD continues to display what was already there. Try using sprintf to format your number. That should get rid of the unwanted 0
HA! you were right :D
I should have seen that, it wasn't the first time I ran into that problem, but last time it was much more obvious, I had a whole word that never left the screen.
I took the easy way out for a fix though, I just added a couple of spaces next to the result so it turns off the couple of spaces after every number.
Not sure why that started all of a sudden, but it's fixed now, and I probably just accidentally deleted something when I was rearranging the code...
Thank you so much for pointing out my error!
Now to start having fun with the display, make it a little more showy :)
Suppose I have some data in a file or maybe database. It could be JSON, XML, YAML, CSV, String[], etc.
I'd like to create a model object with this data. For example:
Data:
{
"name": "John Doe",
"age": "30"
}
Model (Pseudocode):
class Person {
Person(name, age) {
this.name = name;
this.age = age;
}
// business logic
}
Some code that creates Person objects from JSON data (Pseudocode):
peopleArray = [];
recordSet = aJSONReader.read('file');
for (recordSet as r) {
peopleArray.add(new Person(r[0], r[1]));
}
What would you use to build model objects from given data? In my example I'd start by supporting JSON. What if I'd like to change it or support new data formats? How do I decouple this code? Which design pattern fits here?
3 Answers
Use the strategy pattern (see here). You want to provide different methods to parse data. A method would parse JSON, another method would parse XML and another method would read a database. Each method can be seen as a strategy to parse data and generate data objects.
Create a common interface, let's say IDataObjectParser with a single method like public List<DataObject> parse(). Each parser would implement this interface. Then you can exchange the parser whenever you want, e.g. during runtime or according to a configuration file.
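A minimal Python sketch of that strategy arrangement (the class and method names here are made up for illustration):

```python
from abc import ABC, abstractmethod
import csv
import io
import json

class DataObjectParser(ABC):
    """Common interface: each strategy turns raw text into a list of records."""
    @abstractmethod
    def parse(self, raw):
        ...

class JsonParser(DataObjectParser):
    def parse(self, raw):
        return json.loads(raw)

class CsvParser(DataObjectParser):
    def parse(self, raw):
        return list(csv.DictReader(io.StringIO(raw)))

def load_people(parser, raw):
    # The caller depends only on the interface, so the format can be swapped
    # at runtime or via a configuration file without touching this code.
    return [(r["name"], int(r["age"])) for r in parser.parse(raw)]

people_json = load_people(JsonParser(), '[{"name": "John Doe", "age": "30"}]')
people_csv = load_people(CsvParser(), "name,age\nJohn Doe,30\n")
```

Both calls produce the same model data from different formats; adding XML support would mean adding one more class, not changing load_people.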
I like this technique. Create an abstract object whose responsibility will be to provide attributes. In this case, name and age.
interface PersonInput {
public String getName();
public int getAge();
}
Have Person class use that object in the constructor
class Person {
public Person(PersonInput input) {
name = input.getName();
age = input.getAge();
}
}
Now you can have many implementations of PersonInput each dealing with different data format (CSV, XML, etc.)
JSON example:
class JsonPersonInput implements PersonInput {
private String name;
private int age;
public JsonPersonInput(String json) throws JSONException {
JSONObject data = new JSONObject(json);
name = data.getString("name");
age = data.getInt("age");
}
public String getName() {
return name;
}
public int getAge() {
return age;
}
}
You use it like this
new Person(new JsonPersonInput(jsonString))
• Interesting. Is it a some sort of design pattern? Commented Apr 10, 2014 at 14:45
• Depends on your perspective, many design patterns are entangled in a particular solution. I would not stress over trying to find formal design patterns too much. Commented Apr 10, 2014 at 15:03
If by 'data' you are referring to a persistence mechanism then this is the perfect situation for Data Access Objects (or DAOs).
This is most commonly associated with Java web applications and implementations for RDBMSs but it has applications in all Java applications requiring persistence.
You only need to define a DAO interface for your person, say PersonDAO, with two methods on it: readPerson() and addPerson().
interface PersonDAO {
public Person readPerson(String path);
public void addPerson(Person personToBeSaved);
}
Then create another class, say one for your JSON implementation, that implements your DAO, let's call ours JsonPersonDAO.
If you are using a factory to generate your Person objects you then only need to change the DAO implementation that you are using in a single place when the need arises. If you are generating your Person objects from inside your class you only need to change what DAO implementation it uses.
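Here is one possible Python sketch of that DAO arrangement (a JSON file stands in for the persistence mechanism; all names are illustrative):

```python
import json
import os
import tempfile

class Person:
    def __init__(self, name, age):
        self.name, self.age = name, age

class PersonDAO:
    """The interface: concrete DAOs decide where and how a Person is persisted."""
    def read_person(self, path): raise NotImplementedError
    def add_person(self, person, path): raise NotImplementedError

class JsonPersonDAO(PersonDAO):
    def read_person(self, path):
        with open(path) as f:
            data = json.load(f)
        return Person(data["name"], int(data["age"]))

    def add_person(self, person, path):
        with open(path, "w") as f:
            json.dump({"name": person.name, "age": person.age}, f)

# Swapping persistence later (XML, a database, ...) means changing only
# which DAO implementation is constructed here.
dao = JsonPersonDAO()
fd, tmp = tempfile.mkstemp(suffix=".json")
os.close(fd)
dao.add_person(Person("John Doe", 30), tmp)
loaded = dao.read_person(tmp)
os.remove(tmp)
```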
Further reading on this here:
http://www.oracle.com/technetwork/java/dataaccessobject-138824.html
http://best-practice-software-engineering.ifs.tuwien.ac.at/patterns/dao.html
$\begingroup$
I started learning about key sharing and Shamir's secret sharing, and I'm wondering whether you need someone who knows the key initially and then distributes it. Is this step necessary or not?
In my case I have a group of people that run a permissioned blockchain (they are the consensus nodes), and I want all incoming transactions to be encrypted, but decryption should be possible only if a majority decides to reconstruct the decryption key. I don't want this decryption key to be known initially by anyone. Is it possible or not?
$\endgroup$
$\begingroup$
You can use MPC in order to generate a key that is distributed amongst a set of users, without anyone knowing the key itself. This is a very standard MPC problem, and many different protocols can be used. For example, if you want to generate a key for ECIES encryption, then you can basically have each party $P_i$ choose a random $x_i\in\mathbb{Z}_q$ and send a commitment to $Q_i = x_i\cdot G$ along with a zero knowledge proof of knowledge of $x_i$. Then, after all commitments are received, all parties decommit and ZK proofs are verified. Finally, you define the public key to be $Q=\sum_{i=1}^n Q_i$. This will give you a plain additive sharing. You can use a similar idea to get a Shamir sharing as well.
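As a toy illustration of the additive idea only — using a tiny multiplicative group in place of an elliptic curve, and omitting the commitments and zero-knowledge proofs entirely — the contributions combine like this:

```python
import random

# Tiny multiplicative group standing in for the curve: p = 2q + 1, and
# g generates the subgroup of prime order q (so g^x plays the role of x*G).
p, q, g = 23, 11, 4

def keygen_contribution():
    x_i = random.randrange(q)  # party P_i's private value, never shared
    Q_i = pow(g, x_i, p)       # public contribution (commitment/ZK proof omitted)
    return x_i, Q_i

contributions = [keygen_contribution() for _ in range(3)]

# The joint public key is the product of the Q_i (the sum, in EC notation).
Q = 1
for _, Q_i in contributions:
    Q = (Q * Q_i) % p

# For checking only: no party ever computes this sum in a real protocol.
x = sum(x_i for x_i, _ in contributions) % q
```

Because exponents add under multiplication, Q equals g raised to the combined secret, yet that secret exists only as the (never-assembled) sum of the parties' shares.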
$\endgroup$
• $\begingroup$ Can we remove the decommitment? $\endgroup$ – kelalaka Jul 1 at 14:16
• $\begingroup$ @kelalaka, in general no, otherwise an attacker can wait to see everyone else's $Q_i$ and then submit their own based on that. $\endgroup$ – Aman Grewal Jul 1 at 15:29
• $\begingroup$ Prof. If you have time, could you write a canonical answer to Secret sharing - no dealer, modifiable, verifiable. $\endgroup$ – kelalaka Jul 2 at 13:18
• $\begingroup$ @kelalaka I tried to give an answer. It is a very general question, so really the answer is - it can be done with MPC. In order to try to answer it directly for Secret Sharing, one would need a full paper. I hope this is what you were looking for, but please let me know if not. $\endgroup$ – Yehuda Lindell Jul 6 at 10:23
• $\begingroup$ Thanks a lot. Let me read it. FYI, when an answer is given the bounty owner is notified, too. $\endgroup$ – kelalaka Jul 6 at 10:24
jQuery Datepicker restrict second date based on first
<input type="text" id="dt1">
<input type="text" id="dt2">
<script>
$(document).ready(function () {
$("#dt1").datepicker({
dateFormat: "dd-M-yy",
minDate: 0,
onSelect: function (date) {
var date2 = $('#dt1').datepicker('getDate');
date2.setDate(date2.getDate() + 1);
$('#dt2').datepicker('setDate', date2);
//sets minDate to dt1 date + 1
$('#dt2').datepicker('option', 'minDate', date2);
}
});
$('#dt2').datepicker({
dateFormat: "dd-M-yy",
onClose: function () {
var dt1 = $('#dt1').datepicker('getDate');
console.log(dt1);
var dt2 = $('#dt2').datepicker('getDate');
if (dt2 <= dt1) {
var minDate = $('#dt2').datepicker('option', 'minDate');
$('#dt2').datepicker('setDate', minDate);
}
}
});
});
</script>
Programmatic solutions to problems from the EGE (Russian Unified State Exam) in computer science
Problem 5923. Source: Polyakov. KIM Task 16
Problem 5923 page
(E. Jobs) The algorithm for computing the function F(n), where n is a non-negative number, is defined by the following relations:
F(1) = 2,
F(n) = F(n-1) · 3^(n % 5) / 3^(n % 7)
What is the value of the expression F(1025) / F(1030)? Write only an integer as your answer. Note: the operation a % b finds the remainder when the number a is divided by the number b.
Solution
There is no point in computing the values of the elements themselves in this problem: from some element onward the value becomes so small that it is treated as zero. Instead, we track the exponent of three for each element and use it to compute the answer.
Python
f = [0] * 1031
for n in range(2, 1031):
f[n] = f[n - 1] + n % 5 - n % 7
print(3 ** (f[1025] - f[1030]))
PascalABC
var
n: Integer;
f: array[1..1030] of Integer;
begin
f[1] := 0;
for n := 2 to 1030 do
f[n] := f[n - 1] + n mod 5 - n mod 7;
Writeln(trunc(power(3, f[1025] - f[1030])));
end.
C++
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
int f[1031];
f[1] = 0;
for (int n = 2; n < 1031; n++)
f[n] = f[n - 1] + n % 5 - n % 7;
cout << (int)pow(3, f[1025] - f[1030]);
}
Answer
729
Media
Is it just me, or does it seem like the whole multilayered brouhaha (yes, I used the word ‘brouhaha’. Wanna fight aboudit?) about DRM, DCMA, RIAA, MPAA, WTFA is really a sideline from a more important question? The question is, why are we accepting this crazy model of top-down entertainment? Why aren’t we making music for our friends (and only for our friends), or stopping by a neighbor’s house when they’re doing a little play? Why are we even looking for entertainment instead of expression and communication? Why are we afraid to believe our own stories could be as enthralling as those enacted by someone who wouldn’t deign to appear for less than $10 million?
Ratio Calculator
Created by Piotr Małek and Julia Żuławińska
Last updated: May 19, 2020
The ratio calculator will help you compute identical ratios given three of the four parts of the two ratios. A ratio is the relationship between two quantities, very often represented as a fraction. It displays how much of one part is contained in another part, basically representing a fractional or percentage amount of the whole. Before we can use the calculator, we need to understand how to do ratios and how to find a ratio.
How to do ratios
A ratio is made up of two parts, the same as how a fraction is made up of two parts. There is the numerator (the top number of the fraction) and the denominator (the bottom number of the fraction). For example, suppose there is a pie cut into eight slices and three of the eight slices have been eaten. If we want to know the ratio of slices eaten compared to the entire pie, then we have to put the number eaten as the numerator and the total number of pieces as the denominator; 3/8. That is the most basic of ratios since no simplification is involved. But what if we want to simplify or scale up the ratio to a larger, yet equivalent ratio? The next section on how to find a ratio will explain the process.
How to find a ratio
Suppose we have the same ratio of 3/8 but we want to scale it up to a larger, equivalent ratio with a denominator of 72. The way to do this is to set up a proportion, which is two ratios equal to each other and solve for the missing part. This is done as follows:
1. Write both ratios in terms of fractions, labeling the missing part with an x
2. Set the fractions equal to each other, forming a proportion.
3. Use the process of cross multiplication to isolate the variable.
4. Solve for the variable.
5. Use the ratio calculator to check your answer.
In the above example, the steps would look as follows:
1. 3/8 = x/72
2. 8 * x = 72 * 3
3. 8x = 216
4. x = 27
For more complex ratios involving larger numbers or decimals, the ratio calculator is much more convenient to use. The proportion calculator, which does the same thing, may also be used to solve problems such as the one above.
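The cross-multiplication steps above can also be written as a small helper (a sketch assuming the unknown is the numerator of the second ratio):

```python
def solve_proportion(a, b, d):
    """Solve a/b = x/d for x by cross multiplication: b*x = a*d."""
    return a * d / b

# The worked example: 3/8 = x/72
x = solve_proportion(3, 8, 72)
```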
The golden ratio
Golden ratio line
The golden ratio is a special ratio that is achieved when two quantities have the same ratio as the ratio of their sum to the larger of the two quantities. If the two quantities are denoted at a and b, then the golden ratio is (a+b)/a = a/b. The value of this ratio is approximately 1.618. The golden ratio calculator is handy to compute this ratio.
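A quick numeric check of that definition (setting b = 1, the condition (a+b)/a = a/b becomes a² − a − 1 = 0, whose positive root is the golden ratio):

```python
# Closed form for the positive root of a**2 - a - 1 = 0.
phi = (1 + 5 ** 0.5) / 2

a, b = phi, 1.0
lhs = (a + b) / a   # ratio of the sum to the larger quantity
rhs = a / b         # ratio of the two quantities
```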
It has been said that the rectangle that is most aesthetically pleasing to the eye is the golden rectangle. This is a rectangle with length a + b and width a. The rectangle is golden if (a+b)/a = a/b. The golden rectangle calculator will compute the length and width necessary to achieve the perfect golden rectangle.
The ratio calculator is also useful in the geometric application of similar triangles. If the sides of one triangle are in proportion with the sides of another triangle, the two triangles are said to be similar. This applies to other polygons as well.
Piotr Małek and Julia Żuławińska
Doesn't jump back to the slice where planarfigure was drawn after reselection
Closed, Duplicate, Public
Description
Steps to reproduce:
- Open a DICOM dataset
- Select the Measurementbundle
- Draw a planarfigure (e.g. a line)
- Change the slice
- Reselect the planarfigure in the DataManager
- It doesn't jump back to the slice where the planarfigure was drawn!
$\begingroup$
For my own understanding, I am interested in manually replicating the calculation of the standard errors of estimated coefficients as, for example, come with the output of the lm() function in R, but haven't been able to pin it down. What is the formula / implementation used?
$\endgroup$
• $\begingroup$ good question, many people know the regression from linear algebra point of view, where you solve the linear equation $X'X\beta=X'y$ and get the answer for beta. Not clear why we have standard error and assumption behind it. $\endgroup$ – Haitao Du Jul 19 '16 at 13:42
$\begingroup$
The linear model is written as $$ \left| \begin{array}{l} \mathbf{y} = \mathbf{X} \mathbf{\beta} + \mathbf{\epsilon} \\ \mathbf{\epsilon} \sim N(0, \sigma^2 \mathbf{I}), \end{array} \right.$$ where $\mathbf{y}$ denotes the vector of responses, $\mathbf{\beta}$ is the vector of fixed effects parameters, $\mathbf{X}$ is the corresponding design matrix whose columns are the values of the explanatory variables, and $\mathbf{\epsilon}$ is the vector of random errors.
It is well known that an estimate of $\mathbf{\beta}$ is given by (refer, e.g., to the wikipedia article) $$\hat{\mathbf{\beta}} = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{y}.$$ Hence $$ \textrm{Var}(\hat{\mathbf{\beta}}) = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \;\sigma^2 \mathbf{I} \; \mathbf{X} (\mathbf{X}^{\prime} \mathbf{X})^{-1} = \sigma^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}, $$ [reminder: $\textrm{Var}(AX)=A\times \textrm{Var}(X) \times A′$, for some random vector $X$ and some non-random matrix $A$]
so that $$ \widehat{\textrm{Var}}(\hat{\mathbf{\beta}}) = \hat{\sigma}^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}, $$ where $\hat{\sigma}^2$ can be obtained by the Mean Square Error (MSE) in the ANOVA table.
Example with a simple linear regression in R
#------generate one data set with epsilon ~ N(0, 0.25)------
seed <- 1152 #seed
n <- 100 #nb of observations
a <- 5 #intercept
b <- 2.7 #slope
set.seed(seed)
epsilon <- rnorm(n, mean=0, sd=sqrt(0.25))
x <- sample(x=c(0, 1), size=n, replace=TRUE)
y <- a + b * x + epsilon
#-----------------------------------------------------------
#------using lm------
mod <- lm(y ~ x)
#--------------------
#------using the explicit formulas------
X <- cbind(1, x)
betaHat <- solve(t(X) %*% X) %*% t(X) %*% y
var_betaHat <- anova(mod)[[3]][2] * solve(t(X) %*% X)
#---------------------------------------
#------comparison------
#estimate
> mod$coef
(Intercept) x
5.020261 2.755577
> c(betaHat[1], betaHat[2])
[1] 5.020261 2.755577
#standard error
> summary(mod)$coefficients[, 2]
(Intercept) x
0.06596021 0.09725302
> sqrt(diag(var_betaHat))
x
0.06596021 0.09725302
#----------------------
When there is a single explanatory variable, the model reduces to $$y_i = a + bx_i + \epsilon_i, \qquad i = 1, \dotsc, n$$ and $$\mathbf{X} = \left( \begin{array}{cc} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{array} \right), \qquad \mathbf{\beta} = \left( \begin{array}{c} a\\b \end{array} \right)$$ so that $$(\mathbf{X}^{\prime} \mathbf{X})^{-1} = \frac{1}{n\sum x_i^2 - (\sum x_i)^2} \left( \begin{array}{cc} \sum x_i^2 & -\sum x_i \\ -\sum x_i & n \end{array} \right)$$ and formulas become more transparent. For example, the standard error of the estimated slope is $$\sqrt{\widehat{\textrm{Var}}(\hat{b})} = \sqrt{[\hat{\sigma}^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}]_{22}} = \sqrt{\frac{n \hat{\sigma}^2}{n\sum x_i^2 - (\sum x_i)^2}}.$$
> num <- n * anova(mod)[[3]][2]
> denom <- n * sum(x^2) - sum(x)^2
> sqrt(num / denom)
[1] 0.09725302
$\endgroup$
• $\begingroup$ Thanks for the thorough answer. So, I take it the last formula doesn't hold in the multivariate case? $\endgroup$ – ako Dec 1 '12 at 18:18
• $\begingroup$ No, the very last formula only works for the specific X matrix of the simple linear model. In the multivariate case, you have to use the general formula given above. $\endgroup$ – ocram Dec 2 '12 at 7:21
• $\begingroup$ +1, a quick question, how does $Var(\hat\beta)$ come? $\endgroup$ – avocado Feb 9 '14 at 9:32
• $\begingroup$ @loganecolss: It comes from the fact that $\text{Var}(AX)=A\text{Var(X)}A'$, for some random vector $X$ and some non-random matrix $A$. $\endgroup$ – ocram Feb 9 '14 at 9:38
• $\begingroup$ note that these are the right answers for hand calculation, but the actual implementation used within lm.fit/summary.lm is a bit different, for stability and efficiency ... $\endgroup$ – Ben Bolker Nov 8 '15 at 19:51
$\begingroup$
The formulae for these can be found in any intermediate text on statistics; in particular, you can find them in Sheather (2009, Chapter 5), from which the following exercise is also taken (page 138).
The following R code computes the coefficient estimates and their standard errors manually
dfData <- as.data.frame(
read.csv("http://www.stat.tamu.edu/~sheather/book/docs/datasets/MichelinNY.csv",
header=T))
# using direct calculations
vY <- as.matrix(dfData[, -2])[, 5] # dependent variable
mX <- cbind(constant = 1, as.matrix(dfData[, -2])[, -5]) # design matrix
vBeta <- solve(t(mX)%*%mX, t(mX)%*%vY) # coefficient estimates
dSigmaSq <- sum((vY - mX%*%vBeta)^2)/(nrow(mX)-ncol(mX)) # estimate of sigma-squared
mVarCovar <- dSigmaSq*chol2inv(chol(t(mX)%*%mX)) # variance covariance matrix
vStdErr <- sqrt(diag(mVarCovar)) # coeff. est. standard errors
print(cbind(vBeta, vStdErr)) # output
which produces the output
vStdErr
constant -57.6003854 9.2336793
InMichelin 1.9931416 2.6357441
Food 0.2006282 0.6682711
Decor 2.2048571 0.3929987
Service 3.0597698 0.5705031
Compare to the output from lm():
# using lm()
names(dfData)
summary(lm(Price ~ InMichelin + Food + Decor + Service, data = dfData))
which produces the output:
Call:
lm(formula = Price ~ InMichelin + Food + Decor + Service, data = dfData)
Residuals:
Min 1Q Median 3Q Max
-20.898 -5.835 -0.755 3.457 105.785
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -57.6004 9.2337 -6.238 3.84e-09 ***
InMichelin 1.9931 2.6357 0.756 0.451
Food 0.2006 0.6683 0.300 0.764
Decor 2.2049 0.3930 5.610 8.76e-08 ***
Service 3.0598 0.5705 5.363 2.84e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 13.55 on 159 degrees of freedom
Multiple R-squared: 0.6344, Adjusted R-squared: 0.6252
F-statistic: 68.98 on 4 and 159 DF, p-value: < 2.2e-16
$\endgroup$
• $\begingroup$ Nice trick with the solve() function. This would be quite a bit longer without the matrix algebra. Is there a succinct way of performing that specific line with just basic operators? $\endgroup$ – ako Dec 1 '12 at 18:57
• $\begingroup$ @AkselO There is the well-known closed form expression for the OLS estimator, $\widehat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\boldsymbol{Y}$, which you can compute by explicitly computing the inverse of the $(\mathbf{X}'\mathbf{X})$ matrix (as @ocram has done), but this gets tricky with ill-conditioned matrices. $\endgroup$ – tchakravarty Dec 1 '12 at 19:07
• $\begingroup$ The book website is now at gattonweb.uky.edu/sheather/book. $\endgroup$ – user262709 Oct 25 '19 at 12:33
$\begingroup$
Part of Ocram's answer is wrong. Actually:
$\hat{\mathbf{\beta}} = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{y} - (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{\epsilon}.$
$E(\hat{\mathbf{\beta}}) = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{y}.$
And the comment of the first answer shows that more explanation of variance of coefficient is needed:
$\textrm{Var}(\hat{\mathbf{\beta}}) = E(\hat{\mathbf{\beta}}-E(\hat{\mathbf{\beta}}))^2=\textrm{Var}(- (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{\epsilon}) =(\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \;\sigma^2 \mathbf{I} \; \mathbf{X} (\mathbf{X}^{\prime} \mathbf{X})^{-1} = \sigma^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}$
Edit
Thanks, I $\mathbf{wrongly}$ ignored the hat on that beta. The deduction above is $\mathbf{wrong}$. The correct result is:
1.$\hat{\mathbf{\beta}} = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{y}.$ (To get this equation, set the first order derivative of $\mathbf{SSR}$ on $\mathbf{\beta}$ equal to zero, for maxmizing $\mathbf{SSR}$)
2.$E(\hat{\mathbf{\beta}}|\mathbf{X}) = E((\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} (\mathbf{X}\mathbf{\beta}+\mathbf{\epsilon})|\mathbf{X}) = \mathbf{\beta} + ((\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime})E(\mathbf{\epsilon}|\mathbf{X}) = \mathbf{\beta}.$
3.$\textrm{Var}(\hat{\mathbf{\beta}}) = E(\hat{\mathbf{\beta}}-E(\hat{\mathbf{\beta}}|\mathbf{X}))^2=\textrm{Var}((\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{\epsilon}) =(\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \;\sigma^2 \mathbf{I} \; \mathbf{X} (\mathbf{X}^{\prime} \mathbf{X})^{-1} = \sigma^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}$
Hopefully it helps.
$\endgroup$
• $\begingroup$ The derivation of the OLS estimator for the beta vector, $\hat{\boldsymbol \beta} = ({\bf X'X})^{-1}{\bf X'Y}$, is found in any decent regression textbook. In light of that, can you provide a proof that it should be $\hat{\mathbf{\beta}} = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{y} - (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{\epsilon}$ instead? $\endgroup$ – gung - Reinstate Monica Apr 6 '16 at 3:40
• $\begingroup$ Your $\hat\beta$ is not even an estimator, because $\epsilon$ is not observable! $\endgroup$ – whuber Apr 6 '16 at 14:55
• $\begingroup$ This can also be viewed in this video: youtube.com/watch?v=jyBtfhQsf44 $\endgroup$ – StatsStudent Apr 7 '16 at 23:06
Need Bridging help
Discussion in 'Miscellaneous Game Modding' started by CKYamada, Jan 1, 2009 with 4 replies and 328 views.
1. CKYamada
CKYamada Newbie
Can someone send a FR to either collossus or bu id id h i st? I need help setting up the bridge.
2. KranK
KranK Member
Follow a tutorial.
3. TehPhelix
TehPhelix Retired
4. Nubble
Nubble Getting There
The reason that people don't respond to these "Help Me Bridge" threads is that there are at least five (5) tutorials on this forum. If you have specific questions, post them in a thread labeled "bridging questions" or something like that; I'm sure that if you post there, someone will get back to you (me or KranK).
5. KranK
KranK Member
Boot hanging due to cold plug devices failure
Linode Staff
My Linode says it is powered on but I'm unable to reach it via ping or SSH. When I check the console via LISH, it looks like it's hanging at the following message:
* Stopping cold plug devices [fail]
The disk drive for / is not ready yet or not present.
keys:Continue to wait, or Press S to skip mounting or M for manual recovery
What's going on?
1 Reply
These messages seen in the console are very helpful in figuring out what could be going on. This specific message seems to be the result of a failed system startup, explaining why you're unable to access the Linode from the outside.
The specific error you've provided is typically the result of inconsistencies with distribution specific packages/dependencies. Reconfiguring all packages should resolve this but you'll need to boot into Rescue Mode to do so.
To boot into Rescue Mode, select your Linode from within the Cloud Manager and click on the Rescue tab. After you select the disks needed to mount, press Submit. The Linode will reboot into Rescue Mode and you can connect with Lish by pressing the Launch Console button.
While booted in Rescue Mode, run the following commands:
mount -o exec,barrier=0 /dev/sda
cd /media/sda
mount -t proc proc proc/
mount -t sysfs sys sys/
mount -o bind /dev dev/
mount -t devpts pts dev/pts/
chroot /media/sda /bin/bash
dpkg --configure -a
Once you've completed that, reboot your Linode to get it out of Rescue Mode and you should be good to go!
$\begingroup$
If you hash a string using SHA-256 on your computer, and I hash the same string using SHA-256 on my computer, will we generate the same value? Does the algorithm depend on a seed (so we'd both need the same seed) or some other such parameter?
edit: To clarify, by 'string' I meant 'the same byte input', which as the comments and @ispiro's answer point out, may be different for the same character string depending on the encoding.
$\endgroup$
• $\begingroup$ You have to be careful what you mean by "string". A string is a sequence of characters, while hash functions usually process byte sequences. If you use different encodings to map strings to byte sequences, this can give you different hash values. $\endgroup$ – Drunix Feb 5 '18 at 12:05
• $\begingroup$ No, it will depend on encoding. The selected answer is misleading. Maybe take a look at this: stackoverflow.com/questions/47963143/… $\endgroup$ – Koray Tugay Feb 5 '18 at 12:57
• $\begingroup$ Also, while there is no "seed" (aka "salt") in pure hash functions, some libraries include the generation of the salt as part of the function they call "hash" (even though, strictly speaking, hashing is the part after you've integrated the input and the salt and done the encoding). So the accepted answer isn't wrong, if you are talking about pure hashing, but other readers may be mislead if they think their libraries "hash" function only hashes the input string. $\endgroup$ – Guy Schalnat Feb 5 '18 at 16:09
• $\begingroup$ Even beyond the fact that text is not a bytestring without an encoding: even if character strings "look the same", and may even compare the same, there's a variety of characters that look identical, aren't printable, or are zero-width. $\endgroup$ – Nick T Feb 5 '18 at 17:50
• $\begingroup$ OP can confirm but I think the question is assuming the same byte sequence and asking more about how the algorithm behaves given the same input. $\endgroup$ – xdhmoore Feb 5 '18 at 19:25
$\begingroup$
Yes, if you hash the same input with the same function, you will always get the same result.
This follows from the fact that it is a hash-function. By definition a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output.
In practice there is no seed involved in evaluating a hash-function.
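As a concrete illustration, here is a short Python sketch using the standard-library `hashlib` module; hashing the same byte sequence twice yields the identical digest, on any machine:

```python
import hashlib

data = b"some input bytes"
d1 = hashlib.sha256(data).hexdigest()
d2 = hashlib.sha256(data).hexdigest()

print(d1 == d2)   # True: no seed, no randomness, a pure function of the bytes
print(len(d1))    # 64 hex characters = 256 bits
```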
Now, this is how things work in practice. On the theoretical side of things, we often talk about families of hash-functions. In that case there does exist a key that selects which member of the family we are using. The reason for this is a technical problem with the definition of collision resistance.
The naive definition of collision resistance for a single hash function $H : \{0,1\}^* \to \{0,1\}^n$ would be that for all efficient algorithms $\mathcal{A}$ the following probability is negligible $$\Pr[(x_1,x_2)\gets\mathcal{A}(1^n): H(x_1)=H(x_2)]$$
The problem with that is that it is impossible to achieve. Given that $H$ is compressing, collisions necessarily exist. So an algorithm $\mathcal{A}$ that simply has one of those collisions hardcoded and outputs it, has $$\Pr[(x_1,x_2)\gets\mathcal{A}(1^n): H(x_1)=H(x_2)] = 1.$$ So the definition is not achievable, since this $\mathcal{A}$ by definition exists even though nobody might know what it is.
To solve this problem, we define collision resistance for a family of hash-functions $\{H_k : \{0,1\}^* \to \{0,1\}^n\}_k$. We then define that such a family is collision resistant if it holds that the following probability is negligible $$\Pr_{k\gets\{0,1\}^n}[(x_1,x_2)\gets\mathcal{A}(k): H_k(x_1)=H_k(x_2)].$$
Here we do not run into the same problem, because the exact function $\mathcal{A}$ needs to find a collision for is chosen uniformly at random from an exponentially large family. Since $\mathcal{A}$ could have hardcoded collisions for at most a polynomial number of functions in the family, such hash-function families are not trivially impossible.
Note that this means that there is somewhat of a disconnect between the theoretical treatment of hash-functions and their practical use.
$\endgroup$
• $\begingroup$ Would you mind telling me where I could read up on the syntax for those formulae? $\endgroup$ – SwiftsNamesake Feb 5 '18 at 16:43
• $\begingroup$ @SwiftsNamesake Would you mind being slightly more specific about which parts are confusing to you? $\endgroup$ – Maeher Feb 5 '18 at 16:46
• $\begingroup$ I'm just generally curious about how I should interpret those expressions. They remind me a bit of list comprehensions, though. $\endgroup$ – SwiftsNamesake Feb 5 '18 at 17:06
• $\begingroup$ $1^n$ means (here) a bitstring of $n$ bits at 1. It belongs to $\{0,1\}^n$, the set of exactly $n$-bit bitstrings, having $2^n$ elements (where that's a number!). $\{0,1\}^∗$ is, in principle, the (infinite) set of all bitstrings (of finite but unbounded length), although that often in practice ends to be the (immense but finite) set of all bitstrings less than $2^{64}$ or $2^{128}$ bits. $\displaystyle\Pr_{k\gets\{0,1\}^n}[\operatorname{foo}(k)]$ is the probability that $\operatorname{foo}(k)$ holds for a uniformly random $k$-bit bitstring. $\endgroup$ – fgrieu Feb 5 '18 at 17:46
• $\begingroup$ $\{H_k : \{0,1\}^* \to \{0,1\}^n\}_k$ is a set of functions from $\{0,1\}^*$ to $\{0,1\}^n$ parametrized by $k$, each noted $H_k$. And $\displaystyle\Pr_{k\gets\{0,1\}^n}[(x_1,x_2)\gets\mathcal{A}(k): H_k(x_1)=H_k(x_2)]$ is the probability that for a uniformly random $k$-bit bitstring, algorithm $\mathcal{A}$, when fed $k$ as input, outputs a pair of (implicitly: distinct) bitstrings such that the member $H_k$ of the hash family collides for these bitstrings. $\endgroup$ – fgrieu Feb 5 '18 at 18:05
$\begingroup$
Strings aren't byte arrays.
The accepted answer deals with the question of whether SHA256 includes a seed. (Though the proof from the word "function" is arguable, since we call password-to-key functions "functions" though they can include "salt and pepper".) But strings still need to be encoded into bytes to be hashed.
Expounding on Drunix's comment, a quick search has revealed that it's quite likely that identical strings return different hash values owing to the strings being encoded in different encodings.
Here's a highly upvoted answer on StackOverflow suggesting using either UTF8 or UTF16 ("unicode" in the answer), which would nominally return different bytes and therefore different hashes.
And here's an answer using ASCII, which "uses replacement fallback to replace each string that it cannot encode and each byte that it cannot decode with a question mark ("?") character" (MSDN). Again, this returns a different hash than a UTF8-encoded string.
Additionally, take the following answer on StackOverflow. It mentions how macOS (Apple's Mac operating system) stores file names in a specific (unexpected?) way so that certain strings will "change" (at least in their byte representation).
And, of course, if your string comes from a text file, it will depend on the file's encoding. Notepad defaults (at least on my computer) to ANSI.
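A short Python sketch makes the encoding point concrete: the same character string encoded as UTF-8 and as UTF-16 produces different byte sequences, and therefore different SHA-256 digests:

```python
import hashlib

s = "straße"  # any string; non-ASCII just makes the difference more obvious
utf8_bytes = s.encode("utf-8")
utf16_bytes = s.encode("utf-16")

print(utf8_bytes == utf16_bytes)  # False: different byte sequences
print(hashlib.sha256(utf8_bytes).hexdigest() ==
      hashlib.sha256(utf16_bytes).hexdigest())  # False: different hashes
```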
$\endgroup$
• $\begingroup$ Why didn't you also mention EBCDIC? Or computers with non-8-bit bytes ;) $\endgroup$ – Hagen von Eitzen Feb 5 '18 at 20:58
• $\begingroup$ @HagenvonEitzen I was going for hieroglyphs, but felt under qualified :) But seriously, I meant to point out likely problems. $\endgroup$ – ispiro Feb 5 '18 at 21:03
• $\begingroup$ I actually implemented SHA-256 on a 6-bit IBM 1401 mainframe so it could mine Bitcoin. Unfortunately, at the rate of 80 seconds per hash it would take this 1960s punch card business computer much more than the universe's lifetime to mine a block so it wasn't cost-effective. $\endgroup$ – Ken Shirriff Feb 5 '18 at 22:09
• $\begingroup$ I've edited my question to address your point and the comments. @Maeher correctly interpreted what I had meant by my question. $\endgroup$ – conor Feb 6 '18 at 0:06
• $\begingroup$ This answer makes essentially the same mistake that it's trying to correct. The reality is that the term "string" is ambiguous: it often refers to a sequence of bytes (typically octets), occasionally to a sequence of characters (such as Unicode characters), frequently to something approximating a sequence of characters (such as 16-bit unsigned integers intended for interpretation as UTF-16 code units), and sometimes to more than one of these (such as when the only character encoding available is a single-byte encoding). $\endgroup$ – ruakh Feb 6 '18 at 8:22
$\begingroup$
I have random variables $X_0,X_1,\dots,X_n$. $X_0$ has a normal distribution with mean $\mu>0$ and variance $1$. The $X_1,\dots,X_n$ rvs are normally distributed with mean $0$ and variance $1$. Everything is mutually independent.
Let $E$ denote the event that $X_0$ is the largest of these, i.e., $X_0 > \max(X_1,\dots,X_n)$. I want to calculate or estimate $\Pr[E]$. I'm looking for an expression for $\Pr[E]$, as a function of $\mu,n$, or a reasonable estimate or approximation for $\Pr[E]$.
In my application, $n$ is fixed ($n=61$) and I want to find the smallest value for $\mu$ that makes $\Pr[E] \ge 0.99$, but I'm curious about the general question as well.
$\endgroup$
• $\begingroup$ How large is $n$? There ought to be some good asymptotic expressions based on large-sample theory. $\endgroup$ – whuber Nov 15 '12 at 19:59
• $\begingroup$ @whuber, thanks! I edited the question: in my case $n=61$. Even if $n=61$ isn't large enough to count as large, if there are good asymptotic estimates in the case where $n$ is large, that'd be interesting. $\endgroup$ – D.W. Nov 15 '12 at 20:18
• $\begingroup$ Using numerical integration, $\mu \approx 4.91912496$. $\endgroup$ – whuber Nov 15 '12 at 20:33
$\begingroup$
The calculation of such probabilities has been studied extensively by communications engineers under the name $M$-ary orthogonal signaling, where the model is that one of $M$ equal-energy, equally likely orthogonal signals is transmitted and the receiver attempts to decide which one was transmitted by examining the outputs of $M$ filters matched to the signals. Conditioned on the identity of the transmitted signal, the sample outputs of the matched filters are (conditionally) independent unit-variance normal random variables. The sample output of the filter matched to the signal transmitted is a $N(\mu,1)$ random variable while the outputs of all the other filters are $N(0,1)$ random variables.
The conditional probability of a correct decision (which in the present context is the event $C = \{X_0 > \max_i X_i\}$) conditioned on $X_0 = \alpha$ is $$P(C \mid X_0 = \alpha) = \prod_{i=1}^n P\{X_i < \alpha \mid X_0 = \alpha\} = \left[\Phi(\alpha)\right]^n$$ where $\Phi(\cdot)$ is the cumulative probability distribution of a standard normal random variable, and hence the unconditional probability is $$P(C) = \int_{-\infty}^{\infty}P(C \mid X_0 = \alpha) \phi(\alpha-\mu)\,\mathrm d\alpha = \int_{-\infty}^{\infty}\left[\Phi(\alpha)\right]^n \phi(\alpha-\mu)\,\mathrm d\alpha$$ where $\phi(\cdot)$ is the standard normal density function. There is no closed-form expression for the value of this integral which must be evaluated numerically. Engineers are also interested in the complementary event -- that the decision is in error -- but do not like to compute this as $$P\{X_0 < \max_i X_i\} = P(E) = 1-P(C)$$ because this requires very careful evaluation of the integral for $P(C)$ to an accuracy of many significant digits, and such evaluation is both difficult and time-consuming. Instead, the integral for $1-P(C)$ can be integrated by parts to get $$P\{X_0 < \max_i X_i\} = \int_{-\infty}^{\infty} n \left[\Phi(\alpha)\right]^{n-1}\phi(\alpha) \Phi(\alpha - \mu)\,\mathrm d\alpha.$$ This integral is more easy to evaluate numerically, and its value as a function of $\mu$ is graphed and tabulated (though unfortunately only for $n \leq 20$) in Chapter 5 of Telecommunication Systems Engineering by Lindsey and Simon, Prentice-Hall 1973, Dover Press 1991. Alternatively, engineers use the union bound or Bonferroni inequality $$\begin{align*} P\{X_0 < \max_i X_i\} &= P\left\{(X_0 < X_1)\cup (X_0 < X_2) \cup \cdots \cup (X_0 < X_n)\right\}\\ &\leq \sum_{i=1}^{n}P\{X_0 < X_i\}\\ &= nQ\left(\frac{\mu}{\sqrt{2}}\right) \end{align*}$$ where $Q(x) = 1-\Phi(x)$ is the complementary cumulative normal distribution function.
From the union bound, we see that the desired value $0.01$ for $P\{X_0 < \max_i X_i\}$ is bounded above by $61\cdot Q(\mu/\sqrt{2})$, which bound has value $0.01$ at $\mu = 5.08\ldots$. This is slightly larger than the more exact value $\mu = 4.919\ldots$ obtained by @whuber by numerical integration.
More discussion and details about $M$-ary orthogonal signaling can be found on pp. 161-179 of my lecture notes for a class on communication systems.
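As a rough numerical cross-check of the error integral above (a Python sketch using only the standard library; the integration limits and step count are arbitrary choices), plugging in whuber's value μ ≈ 4.919 with n = 61 competing variables gives an error probability of about 0.01:

```python
import math

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):  # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def p_error(mu, n=61, lo=-10.0, hi=12.0, steps=20000):
    """Trapezoidal estimate of  integral  n * Phi(a)**(n-1) * phi(a) * Phi(a - mu) da."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        a = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * n * Phi(a) ** (n - 1) * phi(a) * Phi(a - mu)
    return total * h

print(round(p_error(4.91912496), 3))  # about 0.01
```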
$\endgroup$
$\begingroup$
A formal answer:
The probability distribution (density) for the maximum of $N$ i.i.d. variates is: $p_N(x)= N p(x) \Phi^{N-1}(x)$ where $p$ is the probability density and $\Phi$ is the cumulative distribution function.
From this you can calculate the probability that $X_0$ is greater than the $N-1$ other ones via $ P(E) = (N-1) \int_{-\infty}^{\infty} \int_y^{\infty} p(x_0) p(y) \Phi^{N-2}(y) dx_0 dy$
You may need to look into various approximations in order to tractably deal with this for your specific application.
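The density quoted above is equivalent to the CDF identity P(max ≤ x) = Φ^N(x) for i.i.d. variates; a quick Monte-Carlo sketch in Python (the sample sizes here are arbitrary) checks that identity:

```python
import math
import random

random.seed(1)
N, trials, t = 5, 20000, 1.0

# empirical P(max of N i.i.d. standard normals <= t)
hits = sum(
    max(random.gauss(0.0, 1.0) for _ in range(N)) <= t
    for _ in range(trials)
)
empirical = hits / trials

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
theoretical = Phi(t) ** N   # CDF whose derivative is N * p(x) * Phi(x)**(N-1)

print(abs(empirical - theoretical) < 0.02)  # True, within Monte-Carlo error
```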
$\endgroup$
• $\begingroup$ +1 Actually, the double integral simplifies into a single integral since $$\int_y^\infty p(x_0)\,\mathrm dx_0 = 1 - \Phi(y-\mu)$$ giving $$P(E) = 1 - (N-1)\int_{-\infty}^\infty \Phi^{N-2}(y)p(y)\Phi(y-\mu)\,\mathrm dy$$ which is the same as in my answer. $\endgroup$ – Dilip Sarwate Nov 15 '12 at 23:35
Fussball Tabelle ?
• Did you look on Google? There's a wagonload of these tables there.
As I understand it, you want to put a table on your page showing the football teams' current points / games.
Something like this:
<table border="1" width="58%" id="table1">
<tr>
<td width="333"><b>Spiel</b></td>
<td><b>Punkte</b></td>
</tr>
<tr>
<td width="333">Hamburg - Borussia</td>
<td>0:1</td>
</tr>
<tr>
<td width="333">Hannover - Braunschweig</td>
<td>2.0</td>
</tr>
<tr>
<td width="333">Zydryshinsk - kolxoz puti Il'icha</td>
<td>10:0</td>
</tr>
</table>
• Or did I misunderstand you?
• No, I need a table of the league standings!!! Not match results.
• I don't quite follow you (I'm not into football).
What exactly do you need? Just the data from that table, which you can't find, or the table itself with that data in it?
And if it's the table itself, do you want to write it into your page by hand, or should it be fetched and updated automatically from somewhere?
• Quote
From user slava004:
I don't quite follow you (I'm not into football).
What exactly do you need? Just the data from that table, which you can't find, or the table itself with that data in it?
And if it's the table itself, do you want to write it into your page by hand, or should it be fetched and updated automatically from somewhere?
I need a league table!! It looks roughly like this:
Please register to view this link.
• Quote
From user slava004:
So what do you need from that table now? The HTML code itself?
I already wrote it above.... either a Setup or the HTML code, and not necessarily from that exact table!!
• Keep adding rows (Zeilen) in the same way, as many as you need.
• What gap? Between what?
Are the cells too long, or what?
If it's the cells, reduce the length.
The dimensions are all the same as on their page.
P.S. Post a screenshot.
• Hi!!! OK, let's say, for example:
I write the heading "Fussball Tabelle" on my site.
Below that I want to put the HTML code you suggested, say about 2 cm below the "Fussball Tabelle" heading. But when I put that HTML code in, it gets pushed down about 10 cm!! And there's a big gap between "Fussball Tabelle" and that HTML code!!
• You're overcomplicating this :))))
You were given the code, right? You were! Now grab the CSS that goes with it and everything will be just fine :)
<div class="parent chrome2 single1">
<div class="child c1 first">
<div class="article_subhead">
<div class="article_title"><big><strong>1. Bundesliga Tabelle</strong></big></div>
</div>
<h2></h2>
<table cellspacing="0" cellpadding="2" border="0">
<tr>
<td class="rowtop" width="200" colspan="2"> </td>
<td align="center" width="30"><strong>SP</strong></td>
<td align="center" width="30"><strong>S</strong></td>
<td align="center" width="30"><strong>U</strong></td>
<td align="center" width="30"><strong>N</strong></td>
<td align="center" width="50"><strong>T</strong></td>
<td align="center" width="40"><strong>DIFF</strong></td>
<td align="center" width="40"><strong>PKT</strong></td>
</tr>
<tr>
<td class="row_1" align="center" width="25">1</td>
<td class="row_1" align="left" width="175">Bayern München</td>
<td class="row_1" align="center" width="30">3</td>
<td class="row_1" align="center" width="30">3</td>
<td class="row_1" align="center" width="30">0</td>
<td class="row_1" align="center" width="30">0</td>
<td class="row_1" align="center" width="50">10:0</td>
<td class="row_1" align="center" width="50">10</td>
<td class="row_1" align="center" width="40">9</td>
</tr>
<tr>
<td class="row_2" align="center" width="25">2</td>
<td class="row_2" align="left" width="175">Arminia Bielefeld</td>
<td class="row_2" align="center" width="30">3</td>
<td class="row_2" align="center" width="30">2</td>
<td class="row_2" align="center" width="30">1</td>
<td class="row_2" align="center" width="30">0</td>
<td class="row_2" align="center" width="50">7:3</td>
<td class="row_2" align="center" width="50">4</td>
<td class="row_2" align="center" width="40">7</td>
</tr>
<tr>
<td class="row_3" align="center" width="25">3</td>
<td class="row_3" align="left" width="175">VfL Bochum</td>
<td class="row_3" align="center" width="30">3</td>
<td class="row_3" align="center" width="30">2</td>
<td class="row_3" align="center" width="30">1</td>
<td class="row_3" align="center" width="30">0</td>
<td class="row_3" align="center" width="50">6:4</td>
<td class="row_3" align="center" width="50">2</td>
<td class="row_3" align="center" width="40">7</td>
</tr>
<tr>
<td class="row_4" align="center" width="25">4</td>
<td class="row_4" align="left" width="175">Eintracht Frankfurt</td>
<td class="row_4" align="center" width="30">3</td>
<td class="row_4" align="center" width="30">2</td>
<td class="row_4" align="center" width="30">1</td>
<td class="row_4" align="center" width="30">0</td>
<td class="row_4" align="center" width="50">4:2</td>
<td class="row_4" align="center" width="50">2</td>
<td class="row_4" align="center" width="40">7</td>
</tr>
<tr>
<td class="row_5" align="center" width="25">5</td>
<td class="row_5" align="left" width="175">FC Schalke 04</td>
<td class="row_5" align="center" width="30">4</td>
<td class="row_5" align="center" width="30">1</td>
<td class="row_5" align="center" width="30">3</td>
<td class="row_5" align="center" width="30">0</td>
<td class="row_5" align="center" width="50">8:5</td>
<td class="row_5" align="center" width="50">3</td>
<td class="row_5" align="center" width="40">6</td>
</tr>
<tr>
<td class="row_6" align="center" width="25">6</td>
<td class="row_6" align="left" width="175">Hamburger SV</td>
<td class="row_6" align="center" width="30">3</td>
<td class="row_6" align="center" width="30">2</td>
<td class="row_6" align="center" width="30">0</td>
<td class="row_6" align="center" width="30">1</td>
<td class="row_6" align="center" width="50">3:2</td>
<td class="row_6" align="center" width="50">1</td>
<td class="row_6" align="center" width="40">6</td>
</tr>
<tr>
<td class="row_7" align="center" width="25">7</td>
<td class="row_7" align="left" width="175">Bayer Leverkusen</td>
<td class="row_7" align="center" width="30">4</td>
<td class="row_7" align="center" width="30">1</td>
<td class="row_7" align="center" width="30">2</td>
<td class="row_7" align="center" width="30">1</td>
<td class="row_7" align="center" width="50">4:2</td>
<td class="row_7" align="center" width="50">2</td>
<td class="row_7" align="center" width="40">5</td>
</tr>
<tr>
<td class="row_8" align="center" width="25">8</td>
<td class="row_8" align="left" width="175">VfL Wolfsburg</td>
<td class="row_8" align="center" width="30">3</td>
<td class="row_8" align="center" width="30">1</td>
<td class="row_8" align="center" width="30">1</td>
<td class="row_8" align="center" width="30">1</td>
<td class="row_8" align="center" width="50">5:5</td>
<td class="row_8" align="center" width="50">0</td>
<td class="row_8" align="center" width="40">4</td>
</tr>
<tr>
<td class="row_9" align="center" width="25">9</td>
<td class="row_9" align="left" width="175">VfB Stuttgart</td>
<td class="row_9" align="center" width="30">3</td>
<td class="row_9" align="center" width="30">1</td>
<td class="row_9" align="center" width="30">1</td>
<td class="row_9" align="center" width="30">1</td>
<td class="row_9" align="center" width="50">4:5</td>
<td class="row_9" align="center" width="50">-1</td>
<td class="row_9" align="center" width="40">4</td>
</tr>
<tr>
<td class="row_10" align="center" width="25">10</td>
<td class="row_10" align="left" width="175">Werder Bremen</td>
<td class="row_10" align="center" width="30">3</td>
<td class="row_10" align="center" width="30">1</td>
<td class="row_10" align="center" width="30">1</td>
<td class="row_10" align="center" width="30">1</td>
<td class="row_10" align="center" width="50">3:6</td>
<td class="row_10" align="center" width="50">-3</td>
<td class="row_10" align="center" width="40">4</td>
</tr>
<tr>
<td class="row_11" align="center" width="25">11</td>
<td class="row_11" align="left" width="175">MSV Duisburg</td>
<td class="row_11" align="center" width="30">3</td>
<td class="row_11" align="center" width="30">1</td>
<td class="row_11" align="center" width="30">0</td>
<td class="row_11" align="center" width="30">2</td>
<td class="row_11" align="center" width="50">4:5</td>
<td class="row_11" align="center" width="50">-1</td>
<td class="row_11" align="center" width="40">3</td>
</tr>
<tr>
<td class="row_12" align="center" width="25">12</td>
<td class="row_12" align="left" width="175">Hertha BSC</td>
<td class="row_12" align="center" width="30">3</td>
<td class="row_12" align="center" width="30">1</td>
<td class="row_12" align="center" width="30">0</td>
<td class="row_12" align="center" width="30">2</td>
<td class="row_12" align="center" width="50">3:4</td>
<td class="row_12" align="center" width="50">-1</td>
<td class="row_12" align="center" width="40">3</td>
</tr>
<tr>
<td class="row_13" align="center" width="25">13</td>
<td class="row_13" align="left" width="175">Borussia Dortmund</td>
<td class="row_13" align="center" width="30">3</td>
<td class="row_13" align="center" width="30">1</td>
<td class="row_13" align="center" width="30">0</td>
<td class="row_13" align="center" width="30">2</td>
<td class="row_13" align="center" width="50">5:7</td>
<td class="row_13" align="center" width="50">-2</td>
<td class="row_13" align="center" width="40">3</td>
</tr>
<tr>
<td class="row_14" align="center" width="25">14</td>
<td class="row_14" align="left" width="175">Karlsruher SC</td>
<td class="row_14" align="center" width="30">3</td>
<td class="row_14" align="center" width="30">1</td>
<td class="row_14" align="center" width="30">0</td>
<td class="row_14" align="center" width="30">2</td>
<td class="row_14" align="center" width="50">3:5</td>
<td class="row_14" align="center" width="50">-2</td>
<td class="row_14" align="center" width="40">3</td>
</tr>
<tr>
<td class="row_15" align="center" width="25">15</td>
<td class="row_15" align="left" width="175">1. FC Nürnberg</td>
<td class="row_15" align="center" width="30">3</td>
<td class="row_15" align="center" width="30">1</td>
<td class="row_15" align="center" width="30">0</td>
<td class="row_15" align="center" width="30">2</td>
<td class="row_15" align="center" width="50">2:4</td>
<td class="row_15" align="center" width="50">-2</td>
<td class="row_15" align="center" width="40">3</td>
</tr>
<tr>
<td class="row_16" align="center" width="25">16</td>
<td class="row_16" align="left" width="175">Hannover 96</td>
<td class="row_16" align="center" width="30">3</td>
<td class="row_16" align="center" width="30">1</td>
<td class="row_16" align="center" width="30">0</td>
<td class="row_16" align="center" width="30">2</td>
<td class="row_16" align="center" width="50">2:5</td>
<td class="row_16" align="center" width="50">-3</td>
<td class="row_16" align="center" width="40">3</td>
</tr>
<tr>
<td class="row_17" align="center" width="25">17</td>
<td class="row_17" align="left" width="175">Energie Cottbus</td>
<td class="row_17" align="center" width="30">3</td>
<td class="row_17" align="center" width="30">0</td>
<td class="row_17" align="center" width="30">1</td>
<td class="row_17" align="center" width="30">2</td>
<td class="row_17" align="center" width="50">1:5</td>
<td class="row_17" align="center" width="50">-4</td>
<td class="row_17" align="center" width="40">1</td>
</tr>
<tr>
<td class="row_18" align="center" width="25">18</td>
<td class="row_18" align="left" width="175">Hansa Rostock</td>
<td class="row_18" align="center" width="30">3</td>
<td class="row_18" align="center" width="30">0</td>
<td class="row_18" align="center" width="30">0</td>
<td class="row_18" align="center" width="30">3</td>
<td class="row_18" align="center" width="50">1:6</td>
<td class="row_18" align="center" width="50">-5</td>
<td class="row_18" align="center" width="40">0</td>
</tr>
</table>
</div>
</div>
.sp_imgtext {
color: #777777;
background-color: #EBF3FB;
}
.lpp_day {
color: #777777;
}
.rowtop { font-weight: bold;}
.row_1 { BACKGROUND-COLOR:#e5f2fa; color:#000000;}
.row_2 { BACKGROUND-COLOR:#e5f2fa; color:#000000;}
.row_3 { BACKGROUND-COLOR:#e5f2fa; color:#000000;}
.row_4 { BACKGROUND-COLOR:#ffffff; color:#000000;}
.row_5 { BACKGROUND-COLOR:#ffffff; color:#000000;}
.row_6 { BACKGROUND-COLOR:#ffffff; color:#000000;}
.row_7 { BACKGROUND-COLOR:#ffffff; color:#000000;}
.row_8 { BACKGROUND-COLOR:#ffffff; color:#000000;}
.row_9 { BACKGROUND-COLOR:#ffffff; color:#000000;}
.row_10 { BACKGROUND-COLOR:#ffffff; color:#000000;}
.row_11 { BACKGROUND-COLOR:#ffffff; color:#000000;}
.row_12 { BACKGROUND-COLOR:#ffffff; color:#000000;}
.row_13 { BACKGROUND-COLOR:#ffffff; color:#000000;}
.row_14 { BACKGROUND-COLOR:#ffffff; color:#000000;}
.row_15 { BACKGROUND-COLOR:#ffffff; color:#000000;}
.row_16 { BACKGROUND-COLOR:#eeeeee; color:#000000;}
.row_17 { BACKGROUND-COLOR:#eeeeee; color:#000000;}
.row_18 { BACKGROUND-COLOR:#eeeeee; color:#000000;}
/** breadcrumb */
div.breadcrumb { background-color:transparent; margin-left:15px; margin-top:10px; font-size:11px; color:#000000; }
span.breadcrumb { color:#000000; font-size:11px; }
a.breadcrumb { color:#006699; font-size:11px; }
a.breadcrumb:link { color:#006699; font-size:11px; }
a.breadcrumb:visited { color:#006699; font-size:11px; }
a.breadcrumb:active { color:#006699; font-size:11px; }
a.breadcrumb:hover { color:#006699; font-size:11px; }
/** article archiv */
.om_news_single1 {padding:4px 5px;border-bottom:1px solid #aacbee;text-align:right;}
.om_news_single2 {padding:4px 5px;border-bottom:1px solid #aacbee;text-align:right;}
.om_news_arch_title {float:left;text-align:left;}
/* Artikel Editor */
.article_fontsize1 {font-size:8pt;}
.article_fontsize2 {font-size:10pt;}
.article_fontsize3 {font-size:12pt;}
.article_fontsize4 {font-size:14pt;}
.article_fontsize5 {font-size:18pt;}
.article_fontsize6 {font-size:24pt;}
.article_fontsize7 {font-size:36pt;}
.article_indent {margin-left:20px;}
p {padding-bottom: 9px;}
• Don't just crow "gave, took"!!! I'm a beginner at this, so there's a lot I don't know. Instead of that, explain what's what: what is this CSS? What is it for, and where do I download it?
• Specialists, programmers... tell me why this gap appears: where do I need to adjust the HTML code to bring it all closer together?
1. Kreisklasse St 2 Tabelle
And the league table, say, only starts way down here!!!
Here is that HTML code:
<div class="parent chrome2 single1">
<div class="child c1 first">
<div class="article_subhead">
<div class="article_title"><big><strong>1. Kreisklasse St 2 Tabelle</strong></big></div>
</div>
<h2></h2>
<table cellspacing="0" cellpadding="2" border="0">
<tr>
<td class="rowtop" width="200" colspan="2"> </td>
<td align="center" width="30"><strong>SP</strong></td>
<td align="center" width="30"><strong>S</strong></td>
<td align="center" width="30"><strong>U</strong></td>
<td align="center" width="30"><strong>N</strong></td>
<td align="center" width="50"><strong>T</strong></td>
<td align="center" width="40"><strong>DIFF</strong></td>
<td align="center" width="40"><strong>PKT</strong></td>
</tr>
<tr>
<td class="row_1" align="center" width="25">1</td>
<td class="row_1" align="left" width="175">SW Löwensen</td>
<td class="row_1" align="center" width="30">5</td>
<td class="row_1" align="center" width="30">5</td>
<td class="row_1" align="center" width="30">0</td>
<td class="row_1" align="center" width="30">0</td>
<td class="row_1" align="center" width="50">16:3</td>
<td class="row_1" align="center" width="50">13</td>
<td class="row_1" align="center" width="40">15</td>
</tr>
<tr>
<td class="row_2" align="center" width="25">2</td>
<td class="row_2" align="left" width="175">SSG Halvestorf- Herkendorf II</td>
<td class="row_2" align="center" width="30">5</td>
<td class="row_2" align="center" width="30">4</td>
<td class="row_2" align="center" width="30">1</td>
<td class="row_2" align="center" width="30">0</td>
<td class="row_2" align="center" width="50">24:6</td>
<td class="row_2" align="center" width="50">18</td>
<td class="row_2" align="center" width="40">13</td>
</tr>
<tr>
<td class="row_3" align="center" width="25">3</td>
<td class="row_3" align="left" width="175">Germania Reher</td>
<td class="row_3" align="center" width="30">5</td>
<td class="row_3" align="center" width="30">4</td>
<td class="row_3" align="center" width="30">0</td>
<td class="row_3" align="center" width="30">1</td>
<td class="row_3" align="center" width="50">19:8</td>
<td class="row_3" align="center" width="50">11</td>
<td class="row_3" align="center" width="40">12</td>
</tr>
<tr>
<td class="row_4" align="center" width="25">4</td>
<td class="row_4" align="left" width="175">SG Bergdörfer/ Lichtenh</td>
<td class="row_4" align="center" width="30">4</td>
<td class="row_4" align="center" width="30">3</td>
<td class="row_4" align="center" width="30">0</td>
<td class="row_4" align="center" width="30">1</td>
<td class="row_4" align="center" width="50">13:7</td>
<td class="row_4" align="center" width="50">6</td>
<td class="row_4" align="center" width="40">9</td>
</tr>
<tr>
<td class="row_5" align="center" width="25">5</td>
<td class="row_5" align="left" width="175">TUS Germania Hagen II</td>
<td class="row_5" align="center" width="30">5</td>
<td class="row_5" align="center" width="30">3</td>
<td class="row_5" align="center" width="30">0</td>
<td class="row_5" align="center" width="30">2</td>
<td class="row_5" align="center" width="50">12:11</td>
<td class="row_5" align="center" width="50">1</td>
<td class="row_5" align="center" width="40">9</td>
</tr>
<tr>
<td class="row_6" align="center" width="25">6</td>
<td class="row_6" align="left" width="175">SSV Königsförde</td>
<td class="row_6" align="center" width="30">4</td>
<td class="row_6" align="center" width="30">2</td>
<td class="row_6" align="center" width="30">1</td>
<td class="row_6" align="center" width="30">1</td>
<td class="row_6" align="center" width="50">10:9</td>
<td class="row_6" align="center" width="50">1</td>
<td class="row_6" align="center" width="40">7</td>
</tr>
<tr>
<td class="row_7" align="center" width="25">7</td>
<td class="row_7" align="left" width="175">RW Thal</td>
<td class="row_7" align="center" width="30">5</td>
<td class="row_7" align="center" width="30">2</td>
<td class="row_7" align="center" width="30">1</td>
<td class="row_7" align="center" width="30">2</td>
<td class="row_7" align="center" width="50">10:12</td>
<td class="row_7" align="center" width="50">-2</td>
<td class="row_7" align="center" width="40">7</td>
</tr>
<tr>
<td class="row_8" align="center" width="25">8</td>
<td class="row_8" align="left" width="175">SPVGG Bad Pyrmont II</td>
<td class="row_8" align="center" width="30">5</td>
<td class="row_8" align="center" width="30">2</td>
<td class="row_8" align="center" width="30">0</td>
<td class="row_8" align="center" width="30">3</td>
<td class="row_8" align="center" width="50">16:9</td>
<td class="row_8" align="center" width="50">7</td>
<td class="row_8" align="center" width="40">6</td>
</tr>
<tr>
<td class="row_9" align="center" width="25">9</td>
<td class="row_9" align="left" width="175">TuS Hessisch Oldendorf</td>
<td class="row_9" align="center" width="30">5</td>
<td class="row_9" align="center" width="30">2</td>
<td class="row_9" align="center" width="30">0</td>
<td class="row_9" align="center" width="30">3</td>
<td class="row_9" align="center" width="50">8:9</td>
<td class="row_9" align="center" width="50">-1</td>
<td class="row_9" align="center" width="40">6</td>
</tr>
<tr>
<td class="row_10" align="center" width="25">10</td>
<td class="row_10" align="left" width="175">SF Amelgatzen</td>
<td class="row_10" align="center" width="30">4</td>
<td class="row_10" align="center" width="30">2</td>
<td class="row_10" align="center" width="30">0</td>
<td class="row_10" align="center" width="30">2</td>
<td class="row_10" align="center" width="50">9:11</td>
<td class="row_10" align="center" width="50">-2</td>
<td class="row_10" align="center" width="40">6</td>
</tr>
<tr>
<td class="row_11" align="center" width="25">11</td>
<td class="row_11" align="left" width="175">TSG Emmerthal II</td>
<td class="row_11" align="center" width="30">4</td>
<td class="row_11" align="center" width="30">1</td>
<td class="row_11" align="center" width="30">0</td>
<td class="row_11" align="center" width="30">3</td>
<td class="row_11" align="center" width="50">5:11</td>
<td class="row_11" align="center" width="50">-6</td>
<td class="row_11" align="center" width="40">3</td>
</tr>
<tr>
<td class="row_12" align="center" width="25">12</td>
<td class="row_12" align="left" width="175">TUS Rohden- Segelhorst II</td>
<td class="row_12" align="center" width="30">5</td>
<td class="row_12" align="center" width="30">1</td>
<td class="row_12" align="center" width="30">0</td>
<td class="row_12" align="center" width="30">4</td>
<td class="row_12" align="center" width="50">9:24</td>
<td class="row_12" align="center" width="50">-15</td>
<td class="row_12" align="center" width="40">3</td>
</tr>
<tr>
<td class="row_13" align="center" width="25">13</td>
<td class="row_13" align="left" width="175">FSG Wickbolsen</td>
<td class="row_13" align="center" width="30">5</td>
<td class="row_13" align="center" width="30">0</td>
<td class="row_13" align="center" width="30">1</td>
<td class="row_13" align="center" width="30">4</td>
<td class="row_13" align="center" width="50">6:16</td>
<td class="row_13" align="center" width="50">-10</td>
<td class="row_13" align="center" width="40">1</td>
</tr>
<tr>
<td class="row_14" align="center" width="25">14</td>
<td class="row_14" align="left" width="175">SV Hajen</td>
<td class="row_14" align="center" width="30">5</td>
<td class="row_14" align="center" width="30">0</td>
<td class="row_14" align="center" width="30">0</td>
<td class="row_14" align="center" width="30">5</td>
<td class="row_14" align="center" width="50">4:25</td>
<td class="row_14" align="center" width="50">-21</td>
<td class="row_14" align="center" width="40">0</td>
</tr>
</table>
</div>
</div>
This post was edited 1 time, last edited by UNIVERSAL ().
• Quote
From user slava004:
In the sixth line, delete <h2></h2>.
h stands for Ueberschrift (a heading).
It's empty in your case, but the space is reserved, which is why you get one empty line between the table and the title.
I deleted it, but everything is just as before, no change!! Take a look at the page...
Please register to view this link to the page.
Look all the way down!!
This post was edited 1 time, last edited by UNIVERSAL ().
• Darn, guys, can anybody tell me what's causing this? Is there really nobody here who knows anything about HTML? Guys, and maybe girls, ladies, help me out!! I just don't know why there is such a gap between the things I pointed out above.
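For what it's worth, the gap discussed above usually comes from two things visible in the posted code itself: the empty reserved <h2></h2> element and the p { padding-bottom: 9px; } rule in the downloaded stylesheet. A minimal CSS sketch of how to collapse both (selector names follow the classes in the posted markup, such as .child and .parent; treat this as a starting point, not a guaranteed fix for this particular site):

```css
/* Collapse the empty reserved heading instead of deleting it from the markup */
.child h2:empty {
    margin: 0;
    padding: 0;
    height: 0;
}

/* Pull the table block closer to the heading above it */
.parent {
    margin-top: 0;
    padding-top: 0;
}
```

The :empty pseudo-class matches only elements with no children and no text, so a non-empty heading elsewhere keeps its normal spacing.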
Object Binding
Info
This topic applies only to applications targeting Mobile Development Kit 4.2 or older.
As of the Mobile Development Kit 4.3 release, applications support a unified binding syntax. For more information, see Binding.
Object bindings are strings enclosed in '{}' that contain a path to a property name of the current context's binding object; they are interpreted at runtime. An object binding can be composed of segments that resolve a complex data type, and the value of the final segment is the result of the binding. Object bindings are often used in metadata to retrieve data bound to the current context of an action, control, or page. You can set an object binding on any control or action property that accepts Object Binding as a value.
Simple Object Binding
Example:
Assuming your current context's binding is as follows:
{
"OrderId": "12345",
"ProductName": "Product A"
}
Then you can use this binding in your metadata properties:
"Title": "{OrderId}"
Complex Object Binding
You can also use '/' as a separator for multi-level data structures (e.g., for complex OData types).
Example:
Assuming your current context's binding-data is as follows:
{
"EmployeeID": "12345",
"EmployeeName": "Employee A",
"Address": {
"City": "CityA",
"Street": {
"StreetName": "StreetB",
"HouseNo": "1"
}
},
"Roles": [{
"RoleName": "Role A",
"RoleFunction": "Func A"
},
{
"RoleName": "Role B",
"RoleFunction": "Func B"
}
]
}
You can bind these properties in your metadata:
"Title": "{EmployeeName}",
"Subhead": "{Address/City}",
"Description": "{Address/Street/StreetName}"
You can also bind a property of an item in an array by using a number in the binding path:
"Title": "{Roles/0/RoleName}"
Combining Multiple Object Bindings
You can't combine multiple object bindings in a single property. For example, the following is not supported:
"Subhead": "Address: {Address/City} {Address/Street/StreetName}",
You must use Dynamic Target Path to achieve that.
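Assuming the `{{#Property:...}}` form described in the next section, the combined subhead could instead be written as a dynamic target path (a sketch based on that syntax, not an excerpt from the official docs):

```
"Subhead": "Address: {{#Property:Address/City}} {{#Property:Address/Street/StreetName}}"
```

Each {{...}} is evaluated independently and its result is spliced into the surrounding string.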
Target Path vs. Binding vs. Dynamic Target Path
When specifying the value for bind-able properties in the metadata, the following are equivalent:
• "#Property:myVar" is a "simple target path" that -- as described in this document -- will return the value of the myVar property in the current object context using the #Property segment.
• "{myVar}" is a "regular binding specifier" that is shorthand for the target path above.
• "{{#Property:myVar}}" is a "dynamic target path"; this form is commonly used to create fancy display strings by inserting target paths inside other strings. For example: "{{#Property:TeamName}} - {{#Property:City}}" could resolve to "Cubs - Chicago" when the client evaluates each of the inserted target paths. In this case, since there is only the one inserted path, it would resolve to the same value as the other two.
Last update: December 16, 2020 | __label__pos | 0.974106 |
Comparing changes
base fork: mozilla/kuma
...
head fork: mozilla/kuma
• 7 commits
• 5 files changed
• 0 commit comments
• 3 contributors
70 apps/wiki/helpers.py
@@ -1,3 +1,5 @@
+# coding=utf-8
+
import difflib
import re
import urllib
@@ -5,6 +7,7 @@
import constance.config
from jingo import register
import jinja2
+from pyquery import PyQuery as pq
from tidylib import tidy_document
from tower import ugettext as _
import logging
@@ -13,6 +16,53 @@
from wiki import DIFF_WRAP_COLUMN
+def get_seo_description(content, locale=None):
+ # Create an SEO summary
+ # TODO: Google only takes the first 180 characters, so maybe we find a
+ # logical way to find the end of sentence before 180?
+ seo_summary = ''
+ try:
+ if content:
+ # Need to add a BR to the page content otherwise pyQuery wont find
+ # a <p></p> element if it's the only element in the doc_html
+ seo_analyze_doc_html = content + '<br />'
+ page = pq(seo_analyze_doc_html)
+
+ # Look for the SEO summary class first
+ summaryClasses = page.find('.seoSummary')
+ if len(summaryClasses):
+ seo_summary = summaryClasses.text()
+ else:
+ paragraphs = page.find('p')
+ if paragraphs.length:
+ for p in range(len(paragraphs)):
+ item = paragraphs.eq(p)
+ text = item.text()
+ # Checking for a parent length of 2
+ # because we don't want p's wrapped
+ # in DIVs ("<div class='warning'>") and pyQuery adds
+ # "<html><div>" wrapping to entire document
+ if (len(text) and
+ not 'Redirect' in text and
+ text.find(u'«') == -1 and
+ text.find('«') == -1 and
+ item.parents().length == 2):
+ seo_summary = text.strip()
+ break
+ except:
+ pass
+
+ # Post-found cleanup
+ # remove markup chars
+ seo_summary = seo_summary.replace('<', '').replace('>', '')
+ # remove spaces around some punctuation added by PyQuery
+ if locale == 'en-US':
+ seo_summary = re.sub(r' ([,\)\.])', r'\1', seo_summary)
+ seo_summary = re.sub(r'(\() ', r'\1', seo_summary)
+
+ return seo_summary
+
+
def compare_url(doc, from_id, to_id):
return (reverse('wiki.compare_revisions', args=[doc.full_path],
locale=doc.locale)
@@ -84,10 +134,13 @@ def _massage_diff_content(content):
def bugize_text(content):
content = jinja2.escape(content)
content = re.sub(r'bug\s+#?(\d+)',
- jinja2.Markup('<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=\\1" target="_blank">bug \\1</a>'),
+ jinja2.Markup('<a href="https://bugzilla.mozilla.org/'
+ 'show_bug.cgi?id=\\1" '
+ 'target="_blank">bug \\1</a>'),
content)
return content
+
@register.function
def format_comment(rev):
""" Massages revision comment content after the fact """
@@ -97,7 +150,10 @@ def format_comment(rev):
# If a page move, say so
if prev_rev and prev_rev.slug != rev.slug:
- comment += jinja2.Markup('<span class="slug-change">Moved From <strong>%s</strong> to <strong>%s</strong></span>') % (prev_rev.slug, rev.slug)
+ comment += jinja2.Markup('<span class="slug-change">'
+ 'Moved From <strong>%s</strong> '
+ 'to <strong>%s</strong></span>') % (
+ prev_rev.slug, rev.slug)
return comment
@@ -112,11 +168,11 @@ def diff_table(content_from, content_to, prev_id, curr_id):
to_lines = tidy_to.splitlines()
try:
diff = html_diff.make_table(from_lines, to_lines,
- _("Revision %s") % prev_id,
- _("Revision %s") % curr_id,
- context=True,
- numlines=constance.config.DIFF_CONTEXT_LINES
- )
+ _("Revision %s") % prev_id,
+ _("Revision %s") % curr_id,
+ context=True,
+ numlines=constance.config.DIFF_CONTEXT_LINES
+ )
except RuntimeError:
# some diffs hit a max recursion error
message = _(u'There was an error generating the content.')
50 apps/wiki/tests/test_helpers.py
@@ -0,0 +1,50 @@
+from nose.tools import eq_
+from test_utils import TestCase
+
+from wiki.helpers import get_seo_description
+
+
+class GetSEODescriptionTests(TestCase):
+
+ def test_html_elements_spaces(self):
+ # No spaces with html tags
+ content = (u'<p><span class="seoSummary">The <strong>Document Object '
+ 'Model'
+ '</strong> (<strong>DOM</strong>) is an API for '
+ '<a href="/en-US/docs/HTML" title="en-US/docs/HTML">HTML</a> and '
+ '<a href="/en-US/docs/XML" title="en-US/docs/XML">XML</a> '
+ 'documents. It provides a structural representation of the '
+ 'document, enabling you to modify its content and visual '
+ 'presentation by using a scripting language such as '
+ '<a href="/en-US/docs/JavaScript" '
+ 'title="https://developer.mozilla.org/en-US/docs/JavaScript">'
+ 'JavaScript</a>.</span></p>')
+ expected = ('The Document Object Model (DOM) is an API for HTML and '
+ 'XML'
+ ' documents. It provides a structural representation of the'
+ ' document, enabling you to modify its content and visual'
+ ' presentation by using a scripting language such as'
+ ' JavaScript.')
+ eq_(expected, get_seo_description(content, 'en-US'))
+
+ content = (u'<p><span class="seoSummary"><strong>Cascading Style '
+ 'Sheets</strong>, most of the time abbreviated in '
+ '<strong>CSS</strong>, is a '
+ '<a href="/en-US/docs/DOM/stylesheet">stylesheet</a> '
+ 'language used to describe the presentation of a document '
+ 'written in <a href="/en-US/docs/HTML" title="The '
+ 'HyperText Mark-up Language">HTML</a></span> or <a '
+ 'href="/en-US/docs/XML" title="en-US/docs/XML">XML</a> '
+ '(including various XML languages like <a '
+ 'href="/en-US/docs/SVG" title="en-US/docs/SVG">SVG</a> or '
+ '<a href="/en-US/docs/XHTML" '
+ 'title="en-US/docs/XHTML">XHTML</a>)<span '
+ 'class="seoSummary">. CSS describes how the structured '
+ 'element must be rendered on screen, on paper, in speech, '
+ 'or on other media.</span></p>')
+ expected = ('Cascading Style Sheets, most of the time abbreviated in '
+ 'CSS, is a stylesheet language used to describe the '
+ 'presentation of a document written in HTML. CSS '
+ 'describes how the structured element must be rendered on '
+ 'screen, on paper, in speech, or on other media.')
+ eq_(expected, get_seo_description(content, 'en-US'))
48 apps/wiki/tests/test_views.py
@@ -711,20 +711,11 @@ def my_post(url, timeout=None, headers=None, data=None):
ok_(False, "Data wasn't posted as utf8")
-class DocumentEditingTests(TestCaseBase):
- """Tests for the document-editing view"""
+class DocumentSEOTests(TestCaseBase):
+ """Tests for the document seo logic"""
fixtures = ['test_users.json']
- def test_noindex_post(self):
- client = LocalizingClient()
- client.login(username='admin', password='testpass')
-
- # Go to new document page to ensure no-index header works
- response = client.get(reverse('wiki.new_document', args=[],
- locale=settings.WIKI_DEFAULT_LANGUAGE))
- eq_(response['X-Robots-Tag'], 'noindex')
-
def test_seo_title(self):
client = LocalizingClient()
client.login(username='admin', password='testpass')
@@ -809,6 +800,21 @@ def make_page_and_compare_seo(slug, content, aught_preview):
' <a href="blah">A link</a> is also <cool></p>',
'I am awesome. A link is also cool')
+
+class DocumentEditingTests(TestCaseBase):
+ """Tests for the document-editing view"""
+
+ fixtures = ['test_users.json']
+
+ def test_noindex_post(self):
+ client = LocalizingClient()
+ client.login(username='admin', password='testpass')
+
+ # Go to new document page to ensure no-index header works
+ response = client.get(reverse('wiki.new_document', args=[],
+ locale=settings.WIKI_DEFAULT_LANGUAGE))
+ eq_(response['X-Robots-Tag'], 'noindex')
+
def test_create_on_404(self):
client = LocalizingClient()
client.login(username='admin', password='testpass')
@@ -1323,19 +1329,28 @@ def test_invalid_slug_translate(inv_slug, url, data):
def _run_translate_edit_tests(edit_slug, edit_data, edit_doc):
# Hit the initial URL
- response = client.get(reverse('wiki.edit_document', args=[edit_doc.slug], locale=foreign_locale))
+ response = client.get(reverse('wiki.edit_document',
+ args=[edit_doc.slug],
+ locale=foreign_locale))
eq_(200, response.status_code)
page = pq(response.content)
eq_(edit_data['slug'], page.find('input[name=slug]')[0].value)
- # Attempt an invalid edit of the root, ensure the slug stays the same (i.e. no parent prepending)
+ # Attempt an invalid edit of the root, ensure the slug stays
+ # the same (i.e. no parent prepending)
edit_data['slug'] = invalid_slug
edit_data['form'] = 'both'
- response = client.post(reverse('wiki.edit_document', args=[edit_doc.slug], locale=foreign_locale), edit_data)
+ response = client.post(reverse('wiki.edit_document',
+ args=[edit_doc.slug],
+ locale=foreign_locale),
+ edit_data)
eq_(200, response.status_code) # 200 = bad, invalid data
page = pq(response.content)
- eq_(invalid_slug, page.find('input[name=slug]')[0].value) # Slug doesn't add parent
- self.assertContains(response, page.find('ul.errorlist li a[href="#id_slug"]').text())
+ # Slug doesn't add parent
+ eq_(invalid_slug, page.find('input[name=slug]')[0].value)
+ self.assertContains(response, page.find('ul.errorlist li'
+ ' a[href="#id_slug"]').
+ text())
eq_(0, len(Document.objects.filter(title=edit_data['title'] + ' Redirect 1', locale=foreign_locale))) # Ensure no redirect
# Push a valid edit, without changing the slug
@@ -3239,6 +3254,7 @@ def test_attachment_raw_requires_attachment_host(self):
url = attachment.get_file_url()
resp = self.client.get(url, HTTP_HOST=settings.ATTACHMENT_HOST)
+ eq_('ALLOW-FROM: %s' % settings.DOMAIN, resp['x-frame-options'])
eq_(200, resp.status_code)
def test_attachment_detail(self):
68 apps/wiki/views.py
@@ -1,11 +1,7 @@
# coding=utf-8
from datetime import datetime
-import time
import json
-from collections import defaultdict
-import base64
-import httplib
import hashlib
import logging
from urllib import urlencode
@@ -16,11 +12,6 @@
except:
from StringIO import StringIO
-import requests
-import bleach
-
-from taggit.utils import parse_tags, edit_string_for_tags
-
try:
from functools import wraps
except ImportError:
@@ -44,7 +35,6 @@
import constance.config
from waffle.decorators import waffle_flag
-from waffle import flag_is_active
import jingo
from tower import ugettext_lazy as _lazy
@@ -57,9 +47,9 @@
from access.decorators import permission_required, login_required
from sumo.helpers import urlparams
-from sumo.urlresolvers import Prefixer, reverse
+from sumo.urlresolvers import reverse
from sumo.utils import paginate, smart_int
-from wiki import (DOCUMENTS_PER_PAGE, TEMPLATE_TITLE_PREFIX, ReadOnlyException)
+from wiki import (DOCUMENTS_PER_PAGE, TEMPLATE_TITLE_PREFIX)
from wiki.decorators import check_readonly
from wiki.events import (EditDocumentEvent, ReviewableRevisionInLocaleEvent,
ApproveRevisionInLocaleEvent)
@@ -68,24 +58,21 @@
TreeMoveForm)
from wiki.models import (Document, Revision, HelpfulVote, EditorToolbar,
DocumentTag, ReviewTag, Attachment,
- DocumentRenderingInProgress,
DocumentRenderedContentNotAvailable,
CATEGORIES,
OPERATING_SYSTEMS, GROUPED_OPERATING_SYSTEMS,
FIREFOX_VERSIONS, GROUPED_FIREFOX_VERSIONS,
- REVIEW_FLAG_TAGS_DEFAULT, ALLOWED_ATTRIBUTES,
- ALLOWED_TAGS, ALLOWED_STYLES,
+ REVIEW_FLAG_TAGS_DEFAULT,
DOCUMENT_LAST_MODIFIED_CACHE_KEY_TMPL,
get_current_or_latest_revision)
-from wiki.tasks import send_reviewed_notification, schedule_rebuild_kb
-from wiki.helpers import format_comment
+from wiki.tasks import send_reviewed_notification
+from wiki.helpers import format_comment, get_seo_description
import wiki.content
from wiki import kumascript
from pyquery import PyQuery as pq
from django.utils.safestring import mark_safe
-import logging
log = logging.getLogger('k.wiki')
@@ -303,48 +290,6 @@ def _get_document_for_json(doc, addLocaleToTitle=False):
}
-def get_seo_description(content):
- # Create an SEO summary
- # TODO: Google only takes the first 180 characters, so maybe we find a
- # logical way to find the end of sentence before 180?
- seo_summary = ''
- try:
- if content:
- # Need to add a BR to the page content otherwise pyQuery wont find
- # a <p></p> element if it's the only element in the doc_html
- seo_analyze_doc_html = content + '<br />'
- page = pq(seo_analyze_doc_html)
-
- # Look for the SEO summary class first
- summaryClasses = page.find('.seoSummary')
- if len(summaryClasses):
- seo_summary = summaryClasses.text()
- else:
- paragraphs = page.find('p')
- if paragraphs.length:
- for p in range(len(paragraphs)):
- item = paragraphs.eq(p)
- text = item.text()
- # Checking for a parent length of 2
- # because we don't want p's wrapped
- # in DIVs ("<div class='warning'>") and pyQuery adds
- # "<html><div>" wrapping to entire document
- if (len(text) and
- not 'Redirect' in text and
- text.find(u'«') == -1 and
- text.find('«') == -1 and
- item.parents().length == 2):
- seo_summary = text.strip()
- break
- except:
- pass
-
- # Post-found cleanup
- seo_summary = seo_summary.replace('<', '').replace('>', '')
-
- return seo_summary
-
-
@csrf_exempt
@require_http_methods(['GET', 'PUT', 'HEAD'])
@accepts_auth_key
@@ -578,7 +523,7 @@ def set_common_headers(r):
# Get the SEO summary
seo_summary = ''
if not doc.is_template:
- seo_summary = get_seo_description(doc_html)
+ seo_summary = get_seo_description(doc_html, doc.locale)
# Get the additional title information, if necessary
seo_parent_title = ''
@@ -2093,6 +2038,7 @@ def raw_file(request, attachment_id, filename):
resp = HttpResponse(rev.file.read(), mimetype=rev.mime_type)
resp["Last-Modified"] = rev.created
resp["Content-Length"] = rev.file.size
+ resp['x-frame-options'] = 'ALLOW-FROM: %s' % settings.DOMAIN
return resp
else:
return HttpResponsePermanentRedirect(attachment.get_file_url())
2 media/css/mdn-screen.css
@@ -423,7 +423,7 @@ footer .languages { float: right; text-align: right; margin: 0 0 .5em; }
.home-promos .promo .more { font-size: 1.3em; }
.home-promos .promo p { width: 125px; }
.home-promos .promo div { position: absolute; z-index: 5; height: 163px; width: 218px; background: url("../img/bg-homepromos.png") no-repeat;
- opacity: .6;
+ opacity: 1;
-moz-transition-property: opacity, background-position;
-moz-transition-duration: 0.5s;
-webkit-transition-property: opacity, background-position;
Getting Started
This section outlines the steps to implement the mobile app and TV App functionality necessary to support the Device Interface. These implementation steps include:
Creating an Application Key
Yahoo Connected TV issues and authenticates all mobile apps that connect to the Engine by requiring developers to obtain application developer keys from Yahoo. Each time a new device requests access to the Engine, a check is made to Yahoo's authorization service. This step allows Yahoo to revoke access from malicious applications. Before initiating communication, the mobile app must be authorized and all messages must be encrypted using SSL.
Each Device Communication mobile app needs to have its own application key. Follow these steps to generate a unique application key:
1. Go to https://developer.yahoo.com/ .
2. Click on My Projects on the right side of the screen.
3. Login with your Yahoo ID.
4. Accept the Yahoo Terms of Use.
5. Click on New Project.
6. Select Standard and Continue.
7. Fill out the Application Name in the web form.
8. Select Client/Desktop as the Kind of Application.
9. Fill out the Description field in the web form.
10. Select the first choice for public access from the Access Scopes radio button.
11. Check the Terms of Use checkbox.
12. Click on the Get API Key button.
13. In the upper left corner of the screen, under your application's name, save the Application ID (referred to as appId below).
14. In the Authentication Information section, save the Consumer Key (referred to as consumerKey below) and the Consumer Secret (referred to as consumerSecret below).
15. Compute the Hash-Based Message Authentication Code using the SHA1 hash function (referred to as secret below) as follows:
secret = HMAC-SHA1(consumerSecret, consumerKey)
16. Construct the Application Key from the values above:
application_key = "app_id="+appId+"&consumer_key="+consumerKey+"&secret="+secret
17. Use the resulting value for application_key as your Application Key in the First Time Authentication Device Interface Protocol as follows:
SESSION|CREATE|application_key|device_name|END
There is no prescribed order for the arguments in application_key; your result should be similar to the following example for the test client:
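Steps 15 and 16 can be sketched in Python using only the standard library. The credentials below are placeholders, and the HMAC argument order (consumer secret as the key, consumer key as the message) is an assumption read off the formula above:

```python
import hashlib
import hmac

# Placeholder credentials -- substitute the values from your own Yahoo project.
app_id = "ABC123"
consumer_key = "dj0yJmk9example"
consumer_secret = "0123456789abcdef"

# secret = HMAC-SHA1(consumerSecret, consumerKey): the consumer secret acts as
# the HMAC key and the consumer key as the message (assumed order).
secret = hmac.new(consumer_secret.encode("utf-8"),
                  consumer_key.encode("utf-8"),
                  hashlib.sha1).hexdigest()

application_key = "app_id=%s&consumer_key=%s&secret=%s" % (
    app_id, consumer_key, secret)

# The key is then used in the first-time authentication message:
session_create = "SESSION|CREATE|%s|My Device|END" % application_key
print(session_create)
```

The resulting secret is a 40-character hex digest, and the full message follows the SESSION|CREATE protocol line shown above.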
Getting the Software
The following software is required to test Device Communication:
• The Yahoo Connected TV App Development Kit (ADK) which includes the DNS Service Discovery library: libdns_sd.so
• The pybonjour package which provides a pure-Python interface to Apple Bonjour and compatible DNS-SD libraries (such as Avahi)
• Client-side Python test script: client.py
Installing the Software
To install the software for Device Communication:
• Follow the steps outlined in the Installation Guide to install the ADK.
• Install pybonjour if you want to use Bonjour discovery in your mobile app using these steps:
Configuring the Port
Configure the SSL port by adding the following setting to the config-oem.xml file:
The default port number is 8099.
Running a Simple Test
To run the client test use the following command syntax:
python ./client.py [<hostname> <port>]
To connect through the YCTV Discovery Service, do not specify the hostname and port.
python ./client.py
To connect directly, bypassing the YCTV Discovery Service, specify the hostname and port.
python ./client.py localhost 8099
Type q to quit. If that fails to exit, try Ctrl+C, then Ctrl+Z followed by kill %1.
Example analysis of the difference between cookie and session in PHP
Source: Internet
Author: User
Cookies and sessions are essential techniques in PHP programming, and a thorough understanding of both is fundamental to PHP development. This article analyzes the difference between the two through examples.
1.Cookie
A cookie is a mechanism for storing data on a remote browser to track and identify users.
PHP sends cookies in HTTP headers, so setcookie() must be called before any other output is sent to the browser, the same restriction that applies to the header() function.
1.1 Setting Cookies:
You can set cookies with the setcookie() or setrawcookie() function. They can also be set by sending HTTP headers directly to the client.
1.1.1 Setting a cookie with setcookie():
bool setcookie(string $name [, string $value [, int $expire [, string $path [, string $domain [, bool $secure [, bool $httponly ]]]]]])
name: the cookie name
value: the cookie value
expire: the time at which the cookie expires
path: the server path for which the cookie is valid
domain: the domain for which the cookie is valid; a top-level domain covers its subdomains
secure: if 1, the cookie is only sent over HTTPS connections; with the default of 0 it is sent over both HTTP and HTTPS
Example:
<?php
$value = 'something from somewhere';
setcookie("TestCookie", $value);                /* simple cookie */
setcookie("TestCookie", $value, time() + 3600); /* valid for 1 hour */
setcookie("TestCookie", $value, time() + 3600, "/~rasmus/", ".example.com", 1); /* valid under /~rasmus on example.com and all subdomains */
?>
To set multiple cookie variables, use array syntax in the name: setcookie('var[a]', 'value'); note that the subscript is not quoted. You can then read the value as $_COOKIE['var']['a'].
1.1.2 Setting a cookie using header()
header("Set-Cookie: name=$value[; path=$path[; domain=xxx.com[; ...]]]");
The parameters are the same as those listed above for the setcookie() function.
For example:
$value = 'something from somewhere';
header("Set-Cookie: name=$value");
1.2 Reading cookies:
Cookies sent by the browser can be read directly through PHP's built-in superglobal $_COOKIE.
The example above set the cookie "TestCookie"; now we read it back:
print $_COOKIE['TestCookie'];
This outputs the cookie's value.
1.3 Deleting cookies
Just set the valid time to less than the current time, and leave the value blank. For example:
setcookie("name", "", time() - 1);
The same can be done with header().
1.4 Frequently asked questions:
1) An error when calling setcookie() is usually caused by output or whitespace before the call. It can also happen when the document was converted from another character set and carries a BOM signature (hidden BOM bytes at the start of the file). The solution is to keep the file free of both, or to buffer output with ob_start().
2) $_COOKIE is affected by magic_quotes_gpc and may be automatically escaped.
3) Before relying on cookies, test whether the user's browser supports them.
1.5 Cookie working mechanism:
Some readers are eager to get to the practice, so the underlying mechanism is explained last.
a) The server sets a cookie (or several) on the client by sending an HTTP Set-Cookie header in its response.
b) On subsequent requests, the client automatically sends an HTTP Cookie header to the server, which receives and reads it.
HTTP/1.x 200 OK
X-Powered-By: PHP/5.2.1
Set-Cookie: TestCookie=something from somewhere; path=/
Expires: Thu, 18:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Type: text/html
The Set-Cookie line implements the cookie mechanism. After receiving the line
Set-Cookie: TestCookie=something from somewhere; path=/
the browser creates a cookie file on the client's disk and writes into it:
TestCookie=something from somewhere;
This is the result of our call to setcookie('TestCookie', 'something from somewhere', 0, '/'); that is, the same result as header('Set-Cookie: TestCookie=something from somewhere; path=/');
2. Session
A session uses a cookie with an expiration time of 0 that carries a unique identifier called the session ID (a long string). The server generates matching session files (you can define the session storage type yourself) and associates them with the user agent. The web application stores data in the session and carries it from page to page as the user moves through the site.
Each visitor to the website is assigned a unique identifier, the session ID. It is stored either in a cookie on the client or passed through the URL.
Session support allows any number of variables to be registered and preserved across requests. When a visitor accesses the site, PHP checks (automatically if session.auto_start is set to 1, or on demand via an explicit session_start() or an implicit session_register() call) whether a specific session ID was sent with the request. If it was, the previously saved environment is rebuilt.
2.1 Session ID transfer
2.1.1 Transferring the session ID via cookie
When session_start() is called, the server generates a session ID hash and, by default, sends it to the client in a cookie whose name is the session name, PHPSESSID by default. The value is a 128-bit hash, and the server interacts with the client through this cookie.
Session variable values are serialized internally by PHP and stored in a text file on the server; the corresponding cookie on the client is named PHPSESSID by default.
That is, the server automatically sends an HTTP header:
header('Set-Cookie: ' . session_name() . '=' . session_id() . '; path=/');
that is:
setcookie(session_name(), session_id());
When the user jumps to a new page that calls session_start(), PHP checks the server-side session data associated with the given ID, and creates a new data set if none is found.
2.1.2 Sending the session ID via URL
This method is only used when the user has disabled cookies; since browser cookie support is now widespread, it is otherwise avoided for security reasons.
<a href="page2.php?PHPSESSID=<?php print session_id(); ?>">xxx</a>
The session ID can also be passed via POST.
2.2 Session Basic Usage example
<?php
// page1.php
session_start();
echo 'Welcome to page #1';
/* create session variables and assign values */
$_SESSION['favcolor'] = 'green';
$_SESSION['animal']   = 'cat';
$_SESSION['time']     = time();
// if the client uses cookies, the session is passed to page2.php automatically
echo '<br /><a href="page2.php">Page 2</a>';
// if the client has disabled cookies, pass the session ID in the URL
echo '<br /><a href="page2.php?' . SID . '">Page 2</a>';
/* Under the default php5.2.1 configuration, SID only has a value while the
   cookie is being written; if the session cookie already exists, SID is empty */
?>

<?php
// page2.php
session_start();
print $_SESSION['animal'];  // print a single session variable
var_dump($_SESSION);        // print the session values passed from page1.php
?>
2.3 Use the Session function to control page caching.
In many cases we want to decide whether our page is cached on the client, or to set how long the cache stays valid. For example, a page may hold sensitive content that requires login to view; if it were cached locally, the cached copy could be opened and browsed without logging in.
session_cache_limiter('private'); controls client-side caching of the page and must be called before session_start().
session_cache_expire(int) controls the client cache lifetime, in minutes, and is also called before session_start().
This is just one way to control caching through the session; the page cache can also be controlled with header().
2.4 Delete Session
Deletion is done in three steps.
<?php
session_destroy();                             // step 1: delete the server-side session file
setcookie(session_name(), '', time() - 3600);  // step 2: delete the session cookie on the client
$_SESSION = array();                           // step 3: clear the $_SESSION global array
?>
2.5 Using sessions in large PHP web applications
For sites with heavy traffic, the default session storage method is not suitable; currently the best approach is to store sessions in a database. The function bool session_set_save_handler(callback open, callback close, callback read, callback write, callback destroy, callback gc) is provided to solve exactly this problem.
The six callback functions it uses are as follows:
1. bool open() — opens the session storage mechanism
2. bool close() — closes the session storage operation
3. mixed read() — called when loading session data from storage
4. bool write() — writes all data for the given session ID to storage
5. bool destroy() — destroys the data associated with the given session ID
6. bool gc() — garbage-collects data in the storage system
See the session_set_save_handler() function in the PHP manual for examples.
If you use a class to handle storage, call:
session_set_save_handler(
    array('ClassName', 'open'),
    array('ClassName', 'close'),
    array('ClassName', 'read'),
    array('ClassName', 'write'),
    array('ClassName', 'destroy'),
    array('ClassName', 'gc')
);
This calls the six static methods of the ClassName class. You can also pass objects instead of static methods, but using static members avoids constructing an object and performs better.
2.6 Commonly used session functions:
bool session_start(void) — initializes the session
bool session_destroy(void) — deletes the server-side session file
string session_id() — the ID of the current session
string session_name() — the current session name, i.e. the name of the client cookie that stores the session ID; defaults to PHPSESSID
array session_get_cookie_params() — details of the cookie associated with this session
string session_cache_limiter() — controls client-side caching of pages that use the session
int session_cache_expire() — controls the client cache time
void session_set_cookie_params(int lifetime [, string path [, string domain [, bool secure [, bool httponly]]]]) — sets the cookie parameters for this session
bool session_set_save_handler(callback open, callback close, callback read, callback write, callback destroy, callback gc) — defines custom session storage handlers (instead of the default)
bool session_regenerate_id([bool delete_old_session]) — assigns a new session ID
2.7 Session security issues
Attackers invest a lot of effort in trying to obtain the valid session ID of an existing user; once they have it, they can act in the system with the same capabilities as that user.
Therefore, our main defense is to validate the session ID.
<?php
if (!isset($_SESSION['user_agent'])) {
    $_SESSION['user_agent'] = $_SERVER['REMOTE_ADDR'] . $_SERVER['HTTP_USER_AGENT'];
}
/* if the user's session ID was forged */
elseif ($_SESSION['user_agent'] != $_SERVER['REMOTE_ADDR'] . $_SERVER['HTTP_USER_AGENT']) {
    session_regenerate_id();
}
?>
2.8 Differences between passing the session via cookie and via SID:
Under the default session configuration of PHP 5.2.1, when the session is created the server sends the Set-Cookie header, and the predefined superglobal constant SID has a value at the same moment (writing the cookie and emitting the SID go together). Once $_COOKIE['PHPSESSID'] exists, the cookie is not written again, SID is no longer generated, and its value is empty.
2.9 Session Usage Example
<?php
/** verify the validity of the SESSION **/
function sessionVerify() {
    if (!isset($_SESSION['user_agent'])) {
        $_SESSION['user_agent'] = md5($_SERVER['REMOTE_ADDR'] . $_SERVER['HTTP_USER_AGENT']);
    }
    /* if the user's session ID was forged, reassign a session ID */
    elseif ($_SESSION['user_agent'] != md5($_SERVER['REMOTE_ADDR'] . $_SERVER['HTTP_USER_AGENT'])) {
        session_regenerate_id();
    }
}

/** destroy the session in three steps, leaving nothing behind **/
function sessionDestroy() {
    session_destroy();
    setcookie(session_name(), '', time() - 3600);
    $_SESSION = array();
}
?>
Note: sessions are subject to the same "headers already sent" restriction as cookies.
In PHP 5, all of the session configuration options are programmable; in general we do not need to modify them. For details, refer to the session handler functions in the manual.
Hopefully this article serves as a useful reference for understanding how cookies and sessions are used in PHP.
Q: How should the PHP session be understood, and where does it differ from the cookie? Are there concrete examples?
A session is temporary user data saved on the server side that gives a stateless web service the notion of a conversation; it allows the server to reconstruct user session information.
A cookie is a data-retention mechanism suited to temporary local storage for scripts and to session authentication when interacting with the server.
Simply put, the session requires a cookie to be enabled for normal use.
When crawling an HTTP packet, it is found that cookie:phpsessid=xxxx is sent when the page content is requested, and Set-cookie:phpsessid=xxxx is included in the header information that is returned. If you change the value of this cookie in the header information, it will cause your user login status to change, because the server side can not find the corresponding session file according to the value of PHPSESSID.
If the server serves only static HTML plus scripts, there are no session files, because a static page has no follow-up relationship with the server (setting AJAX requests aside). In that case the cookie serves as local storage for the script to use. Cookies are stored as "name=value" pairs, separated by ";".
Differences in lifetime:
A cookie can have a defined lifetime; once it is exceeded, the browser considers the cookie expired and discards and deletes the cookie file. Even if the server-side session still exists, the lost cookie means the corresponding PHPSESSID value can no longer be supplied, so the session cannot be rebuilt. If no lifetime is defined, the cookie is automatically invalidated when the browser closes.
A session can also be given a lifetime. As long as requests carrying the cookie's PHPSESSID keep arriving within the limit, the session is automatically extended; once it goes unused longer than the limit, it is cleared by the garbage-collection mechanism (which is not guaranteed to run promptly). If the cookie file still exists locally at that point, the session cannot be rebuilt, because the session file for that PHPSESSID no longer exists.
Q: How does the (PHP) session differ from the cookie?
A: The session is storage on the server; the cookie is storage on the client.
Yes, and the biggest differences between session and cookie are what I explored with the code below, with the relevant notes.
Code:
a1.php
<?php
session_start();

function cookiesTest($newValue) {
    if (!isset($_COOKIE['CookiesTest'])) {
        setcookie('CookiesTest', $newValue, time() + 3600);
    }
}

function sessionTest($newValue) {
    // session_register()/session_is_registered() are deprecated;
    // write to $_SESSION directly instead
    if (!isset($_SESSION['SessionTest'])) {
        $_SESSION['SessionTest'] = $newValue;
    }
}

cookiesTest('hello cookies!');
sessionTest('hello session!');

echo 'Cookie values: ';
print_r($_COOKIE);
echo '<br />';
echo 'CookiesTest: ' . $_COOKIE['CookiesTest'] . '<br />';
echo $_SESSION['SessionTest'];
?>
a2.php
<?php
session_start();
echo $_SESSION['SessionTest'];  // the session value set in a1.php is available here
echo $CookiesTest;              // undefined: cookie values are not plain globals; use $_COOKIE['CookiesTest']
?>
Cookies:
(1) Cookies store data across consecutive visits to a page. A cookie's value is not truly global: within a1.php you can fetch it via $_COOKIE['XX'], but if you open a2.php in a freshly started browser the cookie value may not be retrievable, so a cookie is not a global concept in the strict sense.
(2) Cookies are stored on the client; on Windows they live in the user's temp directory as cookie files.
Session: (a special use of a cookie; when cookies are disabled, sessions are blocked too, but the session can be recovered by passing the ID through the URL)
(1) A unique place to store a user's global data. With session_start() the session can be resumed and its values retrieved on any page, without the page that set them still being open. If a1.php above performs session operations, any page opened afterwards that calls session_start() re-enables the corresponding session variables ... Remaining full text >>
The Business Process Model (BPM) interface
Defining processes describes how to define a process and associated status rules by writing JSON by hand. Although this is an entirely feasible method, it can quickly become complicated and is difficult for other people to understand the process.
The business process model (BPM) interface is a set of additional tools that make it easier to define a process and status rules. There are two parts to the BPM interface:
• A convention for defining a process using higher-level BPM step types (as opposed to the execution step types in the manual JSON). This has the following advantages:
• The BPM process supports roles, allowing processes to be defined that span more than one partner, and which will eventually be represented by multiple workers.
• The BPM process can include both flow rules (i.e. the "next" processing using the "state" property) and status rules (used for menus), allowing both execution flow and user flow to be captured in the same model.
• The BPM step types are simplified versions of the underlying execution step types, and a single BPM step type may represent multiple execution steps. This allows common combinations of execution steps to be included as a single BPM step.
• The BPM step types allow for additional parameterisation, allowing a process definition to be configured for different uses.
• The process used to convert the BPM definition into the executable process definition supports bringing in other resources, such as form definitions and files, allowing these to be included easily in processes.
• An interface from modelling tools such as the Camunda Modeler application (see https://camunda.com/products/camunda-bpm/modeler/) which allows a BPM to be built graphically. Please note that Metrici have no affiliation with Camunda and our use of their product does not imply any endorsement of us by them.
The BPM processing and the Camunda interface are combined into a single node, the Camunda Importer, which interprets the Camunda data, converts this into a BPM model, and then uses a process compiler to convert the BPM model into a process definition and status rules.
The sections below provide a step-by-step account of how to build a flow using Camunda Modeler. You can find reference information in the Metrici development guide, Business process model (BPM) interface section. | __label__pos | 0.726736 |
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package runtime
import (
"internal/bytealg"
"internal/cpu"
"runtime/internal/atomic"
"runtime/internal/sys"
"unsafe"
)
var buildVersion = sys.TheVersion
// set using cmd/go/internal/modload.ModInfoProg
var modinfo string
// Goroutine scheduler
// The scheduler's job is to distribute ready-to-run goroutines over worker threads.
//
// The main concepts are:
// G - goroutine.
// M - worker thread, or machine.
// P - processor, a resource that is required to execute Go code.
// M must have an associated P to execute Go code, however it can be
// blocked or in a syscall w/o an associated P.
//
// Design doc at https://golang.org/s/go11sched.
// Worker thread parking/unparking.
// We need to balance between keeping enough running worker threads to utilize
// available hardware parallelism and parking excessive running worker threads
// to conserve CPU resources and power. This is not simple for two reasons:
// (1) scheduler state is intentionally distributed (in particular, per-P work
// queues), so it is not possible to compute global predicates on fast paths;
// (2) for optimal thread management we would need to know the future (don't park
// a worker thread when a new goroutine will be readied in near future).
//
// Three rejected approaches that would work badly:
// 1. Centralize all scheduler state (would inhibit scalability).
// 2. Direct goroutine handoff. That is, when we ready a new goroutine and there
// is a spare P, unpark a thread and handoff it the thread and the goroutine.
// This would lead to thread state thrashing, as the thread that readied the
// goroutine can be out of work the very next moment, we will need to park it.
// Also, it would destroy locality of computation as we want to preserve
// dependent goroutines on the same thread; and introduce additional latency.
// 3. Unpark an additional thread whenever we ready a goroutine and there is an
// idle P, but don't do handoff. This would lead to excessive thread parking/
// unparking as the additional threads will instantly park without discovering
// any work to do.
//
// The current approach:
// We unpark an additional thread when we ready a goroutine if (1) there is an
// idle P and there are no "spinning" worker threads. A worker thread is considered
// spinning if it is out of local work and did not find work in global run queue/
// netpoller; the spinning state is denoted in m.spinning and in sched.nmspinning.
// Threads unparked this way are also considered spinning; we don't do goroutine
// handoff so such threads are out of work initially. Spinning threads do some
// spinning looking for work in per-P run queues before parking. If a spinning
// thread finds work it takes itself out of the spinning state and proceeds to
// execution. If it does not find work it takes itself out of the spinning state
// and then parks.
// If there is at least one spinning thread (sched.nmspinning>0), we don't unpark
// new threads when readying goroutines. To compensate for that, if the last spinning
// thread finds work and stops spinning, it must unpark a new spinning thread.
// This approach smooths out unjustified spikes of thread unparking,
// but at the same time guarantees eventual maximal CPU parallelism utilization.
//
// The main implementation complication is that we need to be very careful during
// spinning->non-spinning thread transition. This transition can race with submission
// of a new goroutine, and either one part or another needs to unpark another worker
// thread. If they both fail to do that, we can end up with semi-persistent CPU
// underutilization. The general pattern for goroutine readying is: submit a goroutine
// to local work queue, #StoreLoad-style memory barrier, check sched.nmspinning.
// The general pattern for spinning->non-spinning transition is: decrement nmspinning,
// #StoreLoad-style memory barrier, check all per-P work queues for new work.
// Note that all this complexity does not apply to global run queue as we are not
// sloppy about thread unparking when submitting to global queue. Also see comments
// for nmspinning manipulation.
var (
m0 m
g0 g
mcache0 *mcache
raceprocctx0 uintptr
)
//go:linkname runtime_inittask runtime..inittask
var runtime_inittask initTask
//go:linkname main_inittask main..inittask
var main_inittask initTask
// main_init_done is a signal used by cgocallbackg that initialization
// has been completed. It is made before _cgo_notify_runtime_init_done,
// so all cgo calls can rely on it existing. When main_init is complete,
// it is closed, meaning cgocallbackg can reliably receive from it.
var main_init_done chan bool
//go:linkname main_main main.main
func main_main()
// mainStarted indicates that the main M has started.
var mainStarted bool
// runtimeInitTime is the nanotime() at which the runtime started.
var runtimeInitTime int64
// Value to use for signal mask for newly created M's.
var initSigmask sigset
// The main goroutine.
func main() {
g := getg()
// Racectx of m0->g0 is used only as the parent of the main goroutine.
// It must not be used for anything else.
g.m.g0.racectx = 0
// Max stack size is 1 GB on 64-bit, 250 MB on 32-bit.
// Using decimal instead of binary GB and MB because
// they look nicer in the stack overflow failure message.
if sys.PtrSize == 8 {
maxstacksize = 1000000000
} else {
maxstacksize = 250000000
}
// Allow newproc to start new Ms.
mainStarted = true
if GOARCH != "wasm" { // no threads on wasm yet, so no sysmon
systemstack(func() {
newm(sysmon, nil, -1)
})
}
// Lock the main goroutine onto this, the main OS thread,
// during initialization. Most programs won't care, but a few
// do require certain calls to be made by the main thread.
// Those can arrange for main.main to run in the main thread
// by calling runtime.LockOSThread during initialization
// to preserve the lock.
lockOSThread()
if g.m != &m0 {
throw("runtime.main not on m0")
}
doInit(&runtime_inittask) // must be before defer
if nanotime() == 0 {
throw("nanotime returning zero")
}
// Defer unlock so that runtime.Goexit during init does the unlock too.
needUnlock := true
defer func() {
if needUnlock {
unlockOSThread()
}
}()
// Record when the world started.
runtimeInitTime = nanotime()
gcenable()
main_init_done = make(chan bool)
if iscgo {
if _cgo_thread_start == nil {
throw("_cgo_thread_start missing")
}
if GOOS != "windows" {
if _cgo_setenv == nil {
throw("_cgo_setenv missing")
}
if _cgo_unsetenv == nil {
throw("_cgo_unsetenv missing")
}
}
if _cgo_notify_runtime_init_done == nil {
throw("_cgo_notify_runtime_init_done missing")
}
// Start the template thread in case we enter Go from
// a C-created thread and need to create a new thread.
startTemplateThread()
cgocall(_cgo_notify_runtime_init_done, nil)
}
doInit(&main_inittask)
close(main_init_done)
needUnlock = false
unlockOSThread()
if isarchive || islibrary {
// A program compiled with -buildmode=c-archive or c-shared
// has a main, but it is not executed.
return
}
fn := main_main // make an indirect call, as the linker doesn't know the address of the main package when laying down the runtime
fn()
if raceenabled {
racefini()
}
// Make racy client program work: if panicking on
// another goroutine at the same time as main returns,
// let the other goroutine finish printing the panic trace.
// Once it does, it will exit. See issues 3934 and 20018.
if atomic.Load(&runningPanicDefers) != 0 {
// Running deferred functions should not take long.
for c := 0; c < 1000; c++ {
if atomic.Load(&runningPanicDefers) == 0 {
break
}
Gosched()
}
}
if atomic.Load(&panicking) != 0 {
gopark(nil, nil, waitReasonPanicWait, traceEvGoStop, 1)
}
exit(0)
for {
var x *int32
*x = 0
}
}
// os_beforeExit is called from os.Exit(0).
//go:linkname os_beforeExit os.runtime_beforeExit
func os_beforeExit() {
if raceenabled {
racefini()
}
}
// start forcegc helper goroutine
func init() {
go forcegchelper()
}
func forcegchelper() {
forcegc.g = getg()
lockInit(&forcegc.lock, lockRankForcegc)
for {
lock(&forcegc.lock)
if forcegc.idle != 0 {
throw("forcegc: phase error")
}
atomic.Store(&forcegc.idle, 1)
goparkunlock(&forcegc.lock, waitReasonForceGCIdle, traceEvGoBlock, 1)
// this goroutine is explicitly resumed by sysmon
if debug.gctrace > 0 {
println("GC forced")
}
// Time-triggered, fully concurrent.
gcStart(gcTrigger{kind: gcTriggerTime, now: nanotime()})
}
}
// Gosched yields the processor, allowing other goroutines to run. It does not
// suspend the current goroutine, so execution resumes automatically.
//go:nosplit
func Gosched() {
checkTimeouts()
mcall(gosched_m)
}
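A minimal usage example of Gosched; with GOMAXPROCS(1) the yield is what lets the second goroutine get the P:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	runtime.GOMAXPROCS(1) // one P, so goroutines must take turns
	done := make(chan struct{})
	go func() {
		fmt.Println("other goroutine ran")
		close(done)
	}()
	runtime.Gosched() // yield the P so the goroutine above can run
	<-done
}
```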
// goschedguarded yields the processor like gosched, but also checks
// for forbidden states and opts out of the yield in those cases.
//go:nosplit
func goschedguarded() {
mcall(goschedguarded_m)
}
// Puts the current goroutine into a waiting state and calls unlockf.
// If unlockf returns false, the goroutine is resumed.
// unlockf must not access this G's stack, as it may be moved between
// the call to gopark and the call to unlockf.
// Reason explains why the goroutine has been parked.
// It is displayed in stack traces and heap dumps.
// Reasons should be unique and descriptive.
// Do not re-use reasons, add new ones.
func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceEv byte, traceskip int) {
if reason != waitReasonSleep {
checkTimeouts() // timeouts may expire while two goroutines keep the scheduler busy
}
mp := acquirem()
gp := mp.curg
status := readgstatus(gp)
if status != _Grunning && status != _Gscanrunning {
throw("gopark: bad g status")
}
mp.waitlock = lock
mp.waitunlockf = unlockf
gp.waitreason = reason
mp.waittraceev = traceEv
mp.waittraceskip = traceskip
releasem(mp)
// can't do anything that might move the G between Ms here.
mcall(park_m)
}
// Puts the current goroutine into a waiting state and unlocks the lock.
// The goroutine can be made runnable again by calling goready(gp).
func goparkunlock(lock *mutex, reason waitReason, traceEv byte, traceskip int) {
gopark(parkunlock_c, unsafe.Pointer(lock), reason, traceEv, traceskip)
}
func goready(gp *g, traceskip int) {
systemstack(func() {
ready(gp, traceskip, true)
})
}
//go:nosplit
func acquireSudog() *sudog {
// Delicate dance: the semaphore implementation calls
// acquireSudog, acquireSudog calls new(sudog),
// new calls malloc, malloc can call the garbage collector,
// and the garbage collector calls the semaphore implementation
// in stopTheWorld.
// Break the cycle by doing acquirem/releasem around new(sudog).
// The acquirem/releasem increments m.locks during new(sudog),
// which keeps the garbage collector from being invoked.
mp := acquirem()
pp := mp.p.ptr()
if len(pp.sudogcache) == 0 {
lock(&sched.sudoglock)
// First, try to grab a batch from central cache.
for len(pp.sudogcache) < cap(pp.sudogcache)/2 && sched.sudogcache != nil {
s := sched.sudogcache
sched.sudogcache = s.next
s.next = nil
pp.sudogcache = append(pp.sudogcache, s)
}
unlock(&sched.sudoglock)
// If the central cache is empty, allocate a new one.
if len(pp.sudogcache) == 0 {
pp.sudogcache = append(pp.sudogcache, new(sudog))
}
}
n := len(pp.sudogcache)
s := pp.sudogcache[n-1]
pp.sudogcache[n-1] = nil
pp.sudogcache = pp.sudogcache[:n-1]
if s.elem != nil {
throw("acquireSudog: found s.elem != nil in cache")
}
releasem(mp)
return s
}
//go:nosplit
func releaseSudog(s *sudog) {
if s.elem != nil {
throw("runtime: sudog with non-nil elem")
}
if s.isSelect {
throw("runtime: sudog with non-false isSelect")
}
if s.next != nil {
throw("runtime: sudog with non-nil next")
}
if s.prev != nil {
throw("runtime: sudog with non-nil prev")
}
if s.waitlink != nil {
throw("runtime: sudog with non-nil waitlink")
}
if s.c != nil {
throw("runtime: sudog with non-nil c")
}
gp := getg()
if gp.param != nil {
throw("runtime: releaseSudog with non-nil gp.param")
}
mp := acquirem() // avoid rescheduling to another P
pp := mp.p.ptr()
if len(pp.sudogcache) == cap(pp.sudogcache) {
// Transfer half of local cache to the central cache.
var first, last *sudog
for len(pp.sudogcache) > cap(pp.sudogcache)/2 {
n := len(pp.sudogcache)
p := pp.sudogcache[n-1]
pp.sudogcache[n-1] = nil
pp.sudogcache = pp.sudogcache[:n-1]
if first == nil {
first = p
} else {
last.next = p
}
last = p
}
lock(&sched.sudoglock)
last.next = sched.sudogcache
sched.sudogcache = first
unlock(&sched.sudoglock)
}
pp.sudogcache = append(pp.sudogcache, s)
releasem(mp)
}
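The two-level cache that acquireSudog and releaseSudog implement can be sketched in isolation. This is single-threaded, so the sched.sudoglock locking is omitted, and the names are illustrative stand-ins for the real fields:

```go
package main

import "fmt"

type node struct{ next *node }

var (
	local   = make([]*node, 0, 4) // stand-in for pp.sudogcache
	central *node                 // stand-in for sched.sudogcache
)

// acquire mirrors acquireSudog: refill half the local cache from the
// central list, allocate only if both are empty, then pop locally.
func acquire() *node {
	if len(local) == 0 {
		for len(local) < cap(local)/2 && central != nil {
			s := central
			central = s.next
			s.next = nil
			local = append(local, s)
		}
		if len(local) == 0 {
			local = append(local, new(node))
		}
	}
	s := local[len(local)-1]
	local = local[:len(local)-1]
	return s
}

func main() {
	fmt.Println(acquire() != nil) // central empty: falls back to new(node)
}
```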
// funcPC returns the entry PC of the function f.
// It assumes that f is a func value. Otherwise the behavior is undefined.
// CAREFUL: In programs with plugins, funcPC can return different values
// for the same function (because there are actually multiple copies of
// the same function in the address space). To be safe, don't use the
// results of this function in any == expression. It is only safe to
// use the result as an address at which to start executing code.
//go:nosplit
func funcPC(f interface{}) uintptr {
return *(*uintptr)(efaceOf(&f).data)
}
// called from assembly
func badmcall(fn func(*g)) {
throw("runtime: mcall called on m->g0 stack")
}
func badmcall2(fn func(*g)) {
throw("runtime: mcall function returned")
}
func badreflectcall() {
panic(plainError("arg size to reflect.call more than 1GB"))
}
var badmorestackg0Msg = "fatal: morestack on g0\n"
//go:nosplit
//go:nowritebarrierrec
func badmorestackg0() {
sp := stringStructOf(&badmorestackg0Msg)
write(2, sp.str, int32(sp.len))
}
var badmorestackgsignalMsg = "fatal: morestack on gsignal\n"
//go:nosplit
//go:nowritebarrierrec
func badmorestackgsignal() {
sp := stringStructOf(&badmorestackgsignalMsg)
write(2, sp.str, int32(sp.len))
}
//go:nosplit
func badctxt() {
throw("ctxt != 0")
}
func lockedOSThread() bool {
gp := getg()
return gp.lockedm != 0 && gp.m.lockedg != 0
}
var (
allgs []*g
allglock mutex
)
func allgadd(gp *g) {
if readgstatus(gp) == _Gidle {
throw("allgadd: bad status Gidle")
}
lock(&allglock)
allgs = append(allgs, gp)
allglen = uintptr(len(allgs))
unlock(&allglock)
}
const (
// Number of goroutine ids to grab from sched.goidgen to local per-P cache at once.
// 16 seems to provide enough amortization, but other than that it's a mostly arbitrary number.
_GoidCacheBatch = 16
)
// cpuinit extracts the environment variable GODEBUG from the environment on
// Unix-like operating systems and calls internal/cpu.Initialize.
func cpuinit() {
const prefix = "GODEBUG="
var env string
switch GOOS {
case "aix", "darwin", "dragonfly", "freebsd", "netbsd", "openbsd", "illumos", "solaris", "linux":
cpu.DebugOptions = true
// Similar to goenv_unix but extracts the environment value for
// GODEBUG directly.
// TODO(moehrmann): remove when general goenvs() can be called before cpuinit()
n := int32(0)
for argv_index(argv, argc+1+n) != nil {
n++
}
for i := int32(0); i < n; i++ {
p := argv_index(argv, argc+1+i)
s := *(*string)(unsafe.Pointer(&stringStruct{unsafe.Pointer(p), findnull(p)}))
if hasPrefix(s, prefix) {
env = gostring(p)[len(prefix):]
break
}
}
}
cpu.Initialize(env)
// CPU feature support variables are used in code generated by the compiler
// to guard execution of instructions that cannot be assumed to be always supported.
x86HasPOPCNT = cpu.X86.HasPOPCNT
x86HasSSE41 = cpu.X86.HasSSE41
x86HasFMA = cpu.X86.HasFMA
armHasVFPv4 = cpu.ARM.HasVFPv4
arm64HasATOMICS = cpu.ARM64.HasATOMICS
}
// The bootstrap sequence is:
//
// call osinit
// call schedinit
// make & queue new G
// call runtime·mstart
//
// The new G calls runtime·main.
func schedinit() {
lockInit(&sched.lock, lockRankSched)
lockInit(&sched.sysmonlock, lockRankSysmon)
lockInit(&sched.deferlock, lockRankDefer)
lockInit(&sched.sudoglock, lockRankSudog)
lockInit(&deadlock, lockRankDeadlock)
lockInit(&paniclk, lockRankPanic)
lockInit(&allglock, lockRankAllg)
lockInit(&allpLock, lockRankAllp)
lockInit(&reflectOffs.lock, lockRankReflectOffs)
lockInit(&finlock, lockRankFin)
lockInit(&trace.bufLock, lockRankTraceBuf)
lockInit(&trace.stringsLock, lockRankTraceStrings)
lockInit(&trace.lock, lockRankTrace)
lockInit(&cpuprof.lock, lockRankCpuprof)
lockInit(&trace.stackTab.lock, lockRankTraceStackTab)
// raceinit must be the first call to race detector.
// In particular, it must be done before mallocinit below calls racemapshadow.
_g_ := getg()
if raceenabled {
_g_.racectx, raceprocctx0 = raceinit()
}
sched.maxmcount = 10000
moduledataverify()
stackinit()
mallocinit()
fastrandinit() // must run before mcommoninit
mcommoninit(_g_.m, -1)
cpuinit() // must run before alginit
alginit() // maps must not be used before this call
modulesinit() // provides activeModules
typelinksinit() // uses maps, activeModules
itabsinit() // uses activeModules
msigsave(_g_.m)
initSigmask = _g_.m.sigmask
goargs()
goenvs()
parsedebugvars()
gcinit()
sched.lastpoll = uint64(nanotime())
procs := ncpu
if n, ok := atoi32(gogetenv("GOMAXPROCS")); ok && n > 0 {
procs = n
}
if procresize(procs) != nil {
throw("unknown runnable goroutine during bootstrap")
}
// For cgocheck > 1, we turn on the write barrier at all times
// and check all pointer writes. We can't do this until after
// procresize because the write barrier needs a P.
if debug.cgocheck > 1 {
writeBarrier.cgo = true
writeBarrier.enabled = true
for _, p := range allp {
p.wbBuf.reset()
}
}
if buildVersion == "" {
// Condition should never trigger. This code just serves
// to ensure runtime·buildVersion is kept in the resulting binary.
buildVersion = "unknown"
}
if len(modinfo) == 1 {
// Condition should never trigger. This code just serves
// to ensure runtime·modinfo is kept in the resulting binary.
modinfo = ""
}
}
func dumpgstatus(gp *g) {
_g_ := getg()
print("runtime: gp: gp=", gp, ", goid=", gp.goid, ", gp->atomicstatus=", readgstatus(gp), "\n")
print("runtime: g: g=", _g_, ", goid=", _g_.goid, ", g->atomicstatus=", readgstatus(_g_), "\n")
}
func checkmcount() {
// sched lock is held
if mcount() > sched.maxmcount {
print("runtime: program exceeds ", sched.maxmcount, "-thread limit\n")
throw("thread exhaustion")
}
}
// mReserveID returns the next ID to use for a new m. This new m is immediately
// considered 'running' by checkdead.
//
// sched.lock must be held.
func mReserveID() int64 {
if sched.mnext+1 < sched.mnext {
throw("runtime: thread ID overflow")
}
id := sched.mnext
sched.mnext++
checkmcount()
return id
}
// Pre-allocated ID may be passed as 'id', or omitted by passing -1.
func mcommoninit(mp *m, id int64) {
_g_ := getg()
// g0 stack won't make sense for user (and is not necessarily unwindable).
if _g_ != _g_.m.g0 {
callers(1, mp.createstack[:])
}
lock(&sched.lock)
if id >= 0 {
mp.id = id
} else {
mp.id = mReserveID()
}
mp.fastrand[0] = uint32(int64Hash(uint64(mp.id), fastrandseed))
mp.fastrand[1] = uint32(int64Hash(uint64(cputicks()), ^fastrandseed))
if mp.fastrand[0]|mp.fastrand[1] == 0 {
mp.fastrand[1] = 1
}
mpreinit(mp)
if mp.gsignal != nil {
mp.gsignal.stackguard1 = mp.gsignal.stack.lo + _StackGuard
}
// Add to allm so garbage collector doesn't free g->m
// when it is just in a register or thread-local storage.
mp.alllink = allm
// NumCgoCall() iterates over allm w/o schedlock,
// so we need to publish it safely.
atomicstorep(unsafe.Pointer(&allm), unsafe.Pointer(mp))
unlock(&sched.lock)
// Allocate memory to hold a cgo traceback if the cgo call crashes.
if iscgo || GOOS == "solaris" || GOOS == "illumos" || GOOS == "windows" {
mp.cgoCallers = new(cgoCallers)
}
}
var fastrandseed uintptr
func fastrandinit() {
s := (*[unsafe.Sizeof(fastrandseed)]byte)(unsafe.Pointer(&fastrandseed))[:]
getRandomData(s)
}
// Mark gp ready to run.
func ready(gp *g, traceskip int, next bool) {
if trace.enabled {
traceGoUnpark(gp, traceskip)
}
status := readgstatus(gp)
// Mark runnable.
_g_ := getg()
mp := acquirem() // disable preemption because it can be holding p in a local var
if status&^_Gscan != _Gwaiting {
dumpgstatus(gp)
throw("bad g->status in ready")
}
// status is Gwaiting or Gscanwaiting, make Grunnable and put on runq
casgstatus(gp, _Gwaiting, _Grunnable)
runqput(_g_.m.p.ptr(), gp, next)
wakep()
releasem(mp)
}
// freezeStopWait is a large value that freezetheworld sets
// sched.stopwait to in order to request that all Gs permanently stop.
const freezeStopWait = 0x7fffffff
// freezing is set to non-zero if the runtime is trying to freeze the
// world.
var freezing uint32
// Similar to stopTheWorld but best-effort and can be called several times.
// There is no reverse operation; it is used during crashing.
// This function must not lock any mutexes.
func freezetheworld() {
atomic.Store(&freezing, 1)
// stopwait and preemption requests can be lost
// due to races with concurrently executing threads,
// so try several times
for i := 0; i < 5; i++ {
// this should tell the scheduler to not start any new goroutines
sched.stopwait = freezeStopWait
atomic.Store(&sched.gcwaiting, 1)
// this should stop running goroutines
if !preemptall() {
break // no running goroutines
}
usleep(1000)
}
// to be sure
usleep(1000)
preemptall()
usleep(1000)
}
// All reads and writes of g's status go through readgstatus, casgstatus
// castogscanstatus, casfrom_Gscanstatus.
//go:nosplit
func readgstatus(gp *g) uint32 {
return atomic.Load(&gp.atomicstatus)
}
// The Gscanstatuses are acting like locks and this releases them.
// If it proves to be a performance hit we should be able to make these
// simple atomic stores but for now we are going to throw if
// we see an inconsistent state.
func casfrom_Gscanstatus(gp *g, oldval, newval uint32) {
success := false
// Check that transition is valid.
switch oldval {
default:
print("runtime: casfrom_Gscanstatus bad oldval gp=", gp, ", oldval=", hex(oldval), ", newval=", hex(newval), "\n")
dumpgstatus(gp)
throw("casfrom_Gscanstatus:top gp->status is not in scan state")
case _Gscanrunnable,
_Gscanwaiting,
_Gscanrunning,
_Gscansyscall,
_Gscanpreempted:
if newval == oldval&^_Gscan {
success = atomic.Cas(&gp.atomicstatus, oldval, newval)
}
}
if !success {
print("runtime: casfrom_Gscanstatus failed gp=", gp, ", oldval=", hex(oldval), ", newval=", hex(newval), "\n")
dumpgstatus(gp)
throw("casfrom_Gscanstatus: gp->status is not in scan state")
}
releaseLockRank(lockRankGscan)
}
// This will return false if the gp is not in the expected status and the cas fails.
// This acts like a lock acquire while the casfrom_Gscanstatus acts like a lock release.
func castogscanstatus(gp *g, oldval, newval uint32) bool {
switch oldval {
case _Grunnable,
_Grunning,
_Gwaiting,
_Gsyscall:
if newval == oldval|_Gscan {
r := atomic.Cas(&gp.atomicstatus, oldval, newval)
if r {
acquireLockRank(lockRankGscan)
}
return r
}
}
print("runtime: castogscanstatus oldval=", hex(oldval), " newval=", hex(newval), "\n")
throw("castogscanstatus")
panic("not reached")
}
// If asked to move to or from a Gscanstatus this will throw. Use the castogscanstatus
// and casfrom_Gscanstatus instead.
// casgstatus will loop if the g->atomicstatus is in a Gscan status until the routine that
// put it in the Gscan state is finished.
//go:nosplit
func casgstatus(gp *g, oldval, newval uint32) {
if (oldval&_Gscan != 0) || (newval&_Gscan != 0) || oldval == newval {
systemstack(func() {
print("runtime: casgstatus: oldval=", hex(oldval), " newval=", hex(newval), "\n")
throw("casgstatus: bad incoming values")
})
}
acquireLockRank(lockRankGscan)
releaseLockRank(lockRankGscan)
// See https://golang.org/cl/21503 for justification of the yield delay.
const yieldDelay = 5 * 1000
var nextYield int64
// loop if gp->atomicstatus is in a scan state giving
// GC time to finish and change the state to oldval.
for i := 0; !atomic.Cas(&gp.atomicstatus, oldval, newval); i++ {
if oldval == _Gwaiting && gp.atomicstatus == _Grunnable {
throw("casgstatus: waiting for Gwaiting but is Grunnable")
}
if i == 0 {
nextYield = nanotime() + yieldDelay
}
if nanotime() < nextYield {
for x := 0; x < 10 && gp.atomicstatus != oldval; x++ {
procyield(1)
}
} else {
osyield()
nextYield = nanotime() + yieldDelay/2
}
}
}
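The retry loop in casgstatus (CAS, brief spin, then yield) has a user-level analogue, with runtime.Gosched standing in for procyield/osyield:

```go
package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
)

// casWithBackoff retries the compare-and-swap, yielding between
// attempts so the goroutine holding the transient state can finish
// and restore the expected old value.
func casWithBackoff(addr *uint32, old, new uint32) {
	for !atomic.CompareAndSwapUint32(addr, old, new) {
		runtime.Gosched()
	}
}

func main() {
	var status uint32 = 1
	casWithBackoff(&status, 1, 2)
	fmt.Println(status) // 2
}
```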
// casgstatus(gp, oldstatus, Gcopystack), assuming oldstatus is Gwaiting or Grunnable.
// Returns old status. Cannot call casgstatus directly, because we are racing with an
// async wakeup that might come in from netpoll. If we see Gwaiting from the readgstatus,
// it might have become Grunnable by the time we get to the cas. If we called casgstatus,
// it would loop waiting for the status to go back to Gwaiting, which it never will.
//go:nosplit
func casgcopystack(gp *g) uint32 {
for {
oldstatus := readgstatus(gp) &^ _Gscan
if oldstatus != _Gwaiting && oldstatus != _Grunnable {
throw("copystack: bad status, not Gwaiting or Grunnable")
}
if atomic.Cas(&gp.atomicstatus, oldstatus, _Gcopystack) {
return oldstatus
}
}
}
// casGToPreemptScan transitions gp from _Grunning to _Gscan|_Gpreempted.
//
// TODO(austin): This is the only status operation that both changes
// the status and locks the _Gscan bit. Rethink this.
func casGToPreemptScan(gp *g, old, new uint32) {
if old != _Grunning || new != _Gscan|_Gpreempted {
throw("bad g transition")
}
acquireLockRank(lockRankGscan)
for !atomic.Cas(&gp.atomicstatus, _Grunning, _Gscan|_Gpreempted) {
}
}
// casGFromPreempted attempts to transition gp from _Gpreempted to
// _Gwaiting. If successful, the caller is responsible for
// re-scheduling gp.
func casGFromPreempted(gp *g, old, new uint32) bool {
if old != _Gpreempted || new != _Gwaiting {
throw("bad g transition")
}
return atomic.Cas(&gp.atomicstatus, _Gpreempted, _Gwaiting)
}
// stopTheWorld stops all P's from executing goroutines, interrupting
// all goroutines at GC safe points and records reason as the reason
// for the stop. On return, only the current goroutine's P is running.
// stopTheWorld must not be called from a system stack and the caller
// must not hold worldsema. The caller must call startTheWorld when
// other P's should resume execution.
//
// stopTheWorld is safe for multiple goroutines to call at the
// same time. Each will execute its own stop, and the stops will
// be serialized.
//
// This is also used by routines that do stack dumps. If the system is
// in panic or being exited, this may not reliably stop all
// goroutines.
func stopTheWorld(reason string) {
semacquire(&worldsema)
gp := getg()
gp.m.preemptoff = reason
systemstack(func() {
// Mark the goroutine which called stopTheWorld preemptible so its
// stack may be scanned.
// This lets a mark worker scan us while we try to stop the world
// since otherwise we could get in a mutual preemption deadlock.
// We must not modify anything on the G stack because a stack shrink
// may occur. A stack shrink is otherwise OK though because in order
// to return from this function (and to leave the system stack) we
// must have preempted all goroutines, including any attempting
// to scan our stack, in which case, any stack shrinking will
// have already completed by the time we exit.
casgstatus(gp, _Grunning, _Gwaiting)
stopTheWorldWithSema()
casgstatus(gp, _Gwaiting, _Grunning)
})
}
// startTheWorld undoes the effects of stopTheWorld.
func startTheWorld() {
systemstack(func() { startTheWorldWithSema(false) })
// worldsema must be held over startTheWorldWithSema to ensure
// gomaxprocs cannot change while worldsema is held.
semrelease(&worldsema)
getg().m.preemptoff = ""
}
// stopTheWorldGC has the same effect as stopTheWorld, but blocks
// until the GC is not running. It also blocks a GC from starting
// until startTheWorldGC is called.
func stopTheWorldGC(reason string) {
semacquire(&gcsema)
stopTheWorld(reason)
}
// startTheWorldGC undoes the effects of stopTheWorldGC.
func startTheWorldGC() {
startTheWorld()
semrelease(&gcsema)
}
// Holding worldsema grants an M the right to try to stop the world.
var worldsema uint32 = 1
// Holding gcsema grants the M the right to block a GC, and blocks
// until the current GC is done. In particular, it prevents gomaxprocs
// from changing concurrently.
//
// TODO(mknyszek): Once gomaxprocs and the execution tracer can handle
// being changed/enabled during a GC, remove this.
var gcsema uint32 = 1
// stopTheWorldWithSema is the core implementation of stopTheWorld.
// The caller is responsible for acquiring worldsema and disabling
// preemption first and then should stopTheWorldWithSema on the system
// stack:
//
// semacquire(&worldsema, 0)
// m.preemptoff = "reason"
// systemstack(stopTheWorldWithSema)
//
// When finished, the caller must either call startTheWorld or undo
// these three operations separately:
//
// m.preemptoff = ""
// systemstack(startTheWorldWithSema)
// semrelease(&worldsema)
//
// It is allowed to acquire worldsema once and then execute multiple
// startTheWorldWithSema/stopTheWorldWithSema pairs.
// Other P's are able to execute between successive calls to
// startTheWorldWithSema and stopTheWorldWithSema.
// Holding worldsema causes any other goroutines invoking
// stopTheWorld to block.
func stopTheWorldWithSema() {
_g_ := getg()
// If we hold a lock, then we won't be able to stop another M
// that is blocked trying to acquire the lock.
if _g_.m.locks > 0 {
throw("stopTheWorld: holding locks")
}
lock(&sched.lock)
sched.stopwait = gomaxprocs
atomic.Store(&sched.gcwaiting, 1)
preemptall()
// stop current P
_g_.m.p.ptr().status = _Pgcstop // Pgcstop is only diagnostic.
sched.stopwait--
// try to retake all P's in Psyscall status
for _, p := range allp {
s := p.status
if s == _Psyscall && atomic.Cas(&p.status, s, _Pgcstop) {
if trace.enabled {
traceGoSysBlock(p)
traceProcStop(p)
}
p.syscalltick++
sched.stopwait--
}
}
// stop idle P's
for {
p := pidleget()
if p == nil {
break
}
p.status = _Pgcstop
sched.stopwait--
}
wait := sched.stopwait > 0
unlock(&sched.lock)
// wait for remaining P's to stop voluntarily
if wait {
for {
// wait for 100us, then try to re-preempt in case of any races
if notetsleep(&sched.stopnote, 100*1000) {
noteclear(&sched.stopnote)
break
}
preemptall()
}
}
// sanity checks
bad := ""
if sched.stopwait != 0 {
bad = "stopTheWorld: not stopped (stopwait != 0)"
} else {
for _, p := range allp {
if p.status != _Pgcstop {
bad = "stopTheWorld: not stopped (status != _Pgcstop)"
}
}
}
if atomic.Load(&freezing) != 0 {
// Some other thread is panicking. This can cause the
// sanity checks above to fail if the panic happens in
// the signal handler on a stopped thread. Either way,
// we should halt this thread.
lock(&deadlock)
lock(&deadlock)
}
if bad != "" {
throw(bad)
}
}
func startTheWorldWithSema(emitTraceEvent bool) int64 {
mp := acquirem() // disable preemption because it can be holding p in a local var
if netpollinited() {
list := netpoll(0) // non-blocking
injectglist(&list)
}
lock(&sched.lock)
procs := gomaxprocs
if newprocs != 0 {
procs = newprocs
newprocs = 0
}
p1 := procresize(procs)
sched.gcwaiting = 0
if sched.sysmonwait != 0 {
sched.sysmonwait = 0
notewakeup(&sched.sysmonnote)
}
unlock(&sched.lock)
for p1 != nil {
p := p1
p1 = p1.link.ptr()
if p.m != 0 {
mp := p.m.ptr()
p.m = 0
if mp.nextp != 0 {
throw("startTheWorld: inconsistent mp->nextp")
}
mp.nextp.set(p)
notewakeup(&mp.park)
} else {
// Start M to run P. Do not start another M below.
newm(nil, p, -1)
}
}
// Capture start-the-world time before doing clean-up tasks.
startTime := nanotime()
if emitTraceEvent {
traceGCSTWDone()
}
// Wake up an additional proc in case we have excessive runnable goroutines
// in local queues or in the global queue. If we don't, the proc will park itself.
// If we have lots of excessive work, resetspinning will unpark additional procs as necessary.
wakep()
releasem(mp)
return startTime
}
// mstart is the entry-point for new Ms.
//
// This must not split the stack because we may not even have stack
// bounds set up yet.
//
// May run during STW (because it doesn't have a P yet), so write
// barriers are not allowed.
//
//go:nosplit
//go:nowritebarrierrec
func mstart() {
_g_ := getg()
osStack := _g_.stack.lo == 0
if osStack {
// Initialize stack bounds from system stack.
// Cgo may have left stack size in stack.hi.
// minit may update the stack bounds.
size := _g_.stack.hi
if size == 0 {
size = 8192 * sys.StackGuardMultiplier
}
_g_.stack.hi = uintptr(noescape(unsafe.Pointer(&size)))
_g_.stack.lo = _g_.stack.hi - size + 1024
}
// Initialize stack guard so that we can start calling regular
// Go code.
_g_.stackguard0 = _g_.stack.lo + _StackGuard
// This is the g0, so we can also call go:systemstack
// functions, which check stackguard1.
_g_.stackguard1 = _g_.stackguard0
mstart1()
// Exit this thread.
switch GOOS {
case "windows", "solaris", "illumos", "plan9", "darwin", "aix":
// Windows, Solaris, illumos, Darwin, AIX and Plan 9 always system-allocate
// the stack, but put it in _g_.stack before mstart,
// so the logic above hasn't set osStack yet.
osStack = true
}
mexit(osStack)
}
func mstart1() {
_g_ := getg()
if _g_ != _g_.m.g0 {
throw("bad runtime·mstart")
}
// Record the caller for use as the top of stack in mcall and
// for terminating the thread.
// We're never coming back to mstart1 after we call schedule,
// so other calls can reuse the current frame.
save(getcallerpc(), getcallersp())
asminit()
minit()
// Install signal handlers; after minit so that minit can
// prepare the thread to be able to handle the signals.
if _g_.m == &m0 {
mstartm0()
}
if fn := _g_.m.mstartfn; fn != nil {
fn()
}
if _g_.m != &m0 {
acquirep(_g_.m.nextp.ptr())
_g_.m.nextp = 0
}
schedule()
}
// mstartm0 implements part of mstart1 that only runs on the m0.
//
// Write barriers are allowed here because we know the GC can't be
// running yet, so they'll be no-ops.
//
//go:yeswritebarrierrec
func mstartm0() {
// Create an extra M for callbacks on threads not created by Go.
// An extra M is also needed on Windows for callbacks created by
// syscall.NewCallback. See issue #6751 for details.
if (iscgo || GOOS == "windows") && !cgoHasExtraM {
cgoHasExtraM = true
newextram()
}
initsig(false)
}
// mexit tears down and exits the current thread.
//
// Don't call this directly to exit the thread, since it must run at
// the top of the thread stack. Instead, use gogo(&_g_.m.g0.sched) to
// unwind the stack to the point that exits the thread.
//
// It is entered with m.p != nil, so write barriers are allowed. It
// will release the P before exiting.
//
//go:yeswritebarrierrec
func mexit(osStack bool) {
g := getg()
m := g.m
if m == &m0 {
// This is the main thread. Just wedge it.
//
// On Linux, exiting the main thread puts the process
// into a non-waitable zombie state. On Plan 9,
// exiting the main thread unblocks wait even though
// other threads are still running. On Solaris we can
// neither exitThread nor return from mstart. Other
// bad things probably happen on other platforms.
//
// We could try to clean up this M more before wedging
// it, but that complicates signal handling.
handoffp(releasep())
lock(&sched.lock)
sched.nmfreed++
checkdead()
unlock(&sched.lock)
notesleep(&m.park)
throw("locked m0 woke up")
}
sigblock()
unminit()
// Free the gsignal stack.
if m.gsignal != nil {
stackfree(m.gsignal.stack)
// On some platforms, when calling into VDSO (e.g. nanotime)
// we store our g on the gsignal stack, if there is one.
// Now the stack is freed, unlink it from the m, so we
// won't write to it when calling VDSO code.
m.gsignal = nil
}
// Remove m from allm.
lock(&sched.lock)
for pprev := &allm; *pprev != nil; pprev = &(*pprev).alllink {
if *pprev == m {
*pprev = m.alllink
goto found
}
}
throw("m not found in allm")
found:
if !osStack {
// Delay reaping m until it's done with the stack.
//
// If this is using an OS stack, the OS will free it
// so there's no need for reaping.
atomic.Store(&m.freeWait, 1)
// Put m on the free list, though it will not be reaped until
// freeWait is 0. Note that the free list must not be linked
// through alllink because some functions walk allm without
// locking, so may be using alllink.
m.freelink = sched.freem
sched.freem = m
}
unlock(&sched.lock)
// Release the P.
handoffp(releasep())
// After this point we must not have write barriers.
// Invoke the deadlock detector. This must happen after
// handoffp because it may have started a new M to take our
// P's work.
lock(&sched.lock)
sched.nmfreed++
checkdead()
unlock(&sched.lock)
if osStack {
// Return from mstart and let the system thread
// library free the g0 stack and terminate the thread.
return
}
// mstart is the thread's entry point, so there's nothing to
// return to. Exit the thread directly. exitThread will clear
// m.freeWait when it's done with the stack and the m can be
// reaped.
exitThread(&m.freeWait)
}
// forEachP calls fn(p) for every P p when p reaches a GC safe point.
// If a P is currently executing code, this will bring the P to a GC
// safe point and execute fn on that P. If the P is not executing code
// (it is idle or in a syscall), this will call fn(p) directly while
// preventing the P from exiting its state. This does not ensure that
// fn will run on every CPU executing Go code, but it acts as a global
// memory barrier. GC uses this as a "ragged barrier."
//
// The caller must hold worldsema.
//
//go:systemstack
func forEachP(fn func(*p)) {
mp := acquirem()
_p_ := getg().m.p.ptr()
lock(&sched.lock)
if sched.safePointWait != 0 {
throw("forEachP: sched.safePointWait != 0")
}
sched.safePointWait = gomaxprocs - 1
sched.safePointFn = fn
// Ask all Ps to run the safe point function.
for _, p := range allp {
if p != _p_ {
atomic.Store(&p.runSafePointFn, 1)
}
}
preemptall()
// Any P entering _Pidle or _Psyscall from now on will observe
// p.runSafePointFn == 1 and will call runSafePointFn when
// changing its status to _Pidle/_Psyscall.
// Run safe point function for all idle Ps. sched.pidle will
// not change because we hold sched.lock.
for p := sched.pidle.ptr(); p != nil; p = p.link.ptr() {
if atomic.Cas(&p.runSafePointFn, 1, 0) {
fn(p)
sched.safePointWait--
}
}
wait := sched.safePointWait > 0
unlock(&sched.lock)
// Run fn for the current P.
fn(_p_)
// Force Ps currently in _Psyscall into _Pidle and hand them
// off to induce safe point function execution.
for _, p := range allp {
s := p.status
if s == _Psyscall && p.runSafePointFn == 1 && atomic.Cas(&p.status, s, _Pidle) {
if trace.enabled {
traceGoSysBlock(p)
traceProcStop(p)
}
p.syscalltick++
handoffp(p)
}
}
// Wait for remaining Ps to run fn.
if wait {
for {
// Wait for 100us, then try to re-preempt in
// case of any races.
//
// Requires system stack.
if notetsleep(&sched.safePointNote, 100*1000) {
noteclear(&sched.safePointNote)
break
}
preemptall()
}
}
if sched.safePointWait != 0 {
throw("forEachP: not done")
}
for _, p := range allp {
if p.runSafePointFn != 0 {
throw("forEachP: P did not run fn")
}
}
lock(&sched.lock)
sched.safePointFn = nil
unlock(&sched.lock)
releasem(mp)
}
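The "ragged barrier" above can be sketched in ordinary Go, assuming plain goroutines in place of Ps: a coordinator raises a per-worker flag, and each worker runs fn at its own next "safe point" (here, the top of its loop). All names below are illustrative, not runtime API.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

// runRaggedBarrier asks n workers to each run "fn" once, waits until
// all have, and returns how many times fn ran.
func runRaggedBarrier(n int) int32 {
	flags := make([]int32, n)
	var ran int32
	var done sync.WaitGroup
	done.Add(n)
	for i := 0; i < n; i++ {
		go func(i int) {
			defer done.Done()
			for {
				// ... a unit of real work would go here ...
				// Safe point: the CAS resolves the race between this
				// worker and the coordinator, like runSafePointFn.
				if atomic.CompareAndSwapInt32(&flags[i], 1, 0) {
					atomic.AddInt32(&ran, 1) // the fn(p) of forEachP
					return
				}
				runtime.Gosched()
			}
		}(i)
	}
	for i := range flags {
		atomic.StoreInt32(&flags[i], 1) // request the barrier on worker i
	}
	done.Wait() // workers hit their safe points at different times ("ragged")
	return atomic.LoadInt32(&ran)
}

func main() {
	fmt.Println("fn ran on", runRaggedBarrier(4), "workers")
}
```

Unlike the real forEachP, this sketch has no analogue of idle or syscall Ps, where the coordinator runs fn on the worker's behalf.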
// runSafePointFn runs the safe point function, if any, for this P.
// This should be called like
//
// if getg().m.p.runSafePointFn != 0 {
// runSafePointFn()
// }
//
// runSafePointFn must be checked on any transition in to _Pidle or
// _Psyscall to avoid a race where forEachP sees that the P is running
// just before the P goes into _Pidle/_Psyscall and neither forEachP
// nor the P run the safe-point function.
func runSafePointFn() {
p := getg().m.p.ptr()
// Resolve the race between forEachP running the safe-point
// function on this P's behalf and this P running the
// safe-point function directly.
if !atomic.Cas(&p.runSafePointFn, 1, 0) {
return
}
sched.safePointFn(p)
lock(&sched.lock)
sched.safePointWait--
if sched.safePointWait == 0 {
notewakeup(&sched.safePointNote)
}
unlock(&sched.lock)
}
// When running with cgo, we call _cgo_thread_start
// to start threads for us so that we can play nicely with
// foreign code.
var cgoThreadStart unsafe.Pointer
type cgothreadstart struct {
g guintptr
tls *uint64
fn unsafe.Pointer
}
// Allocate a new m unassociated with any thread.
// Can use p for allocation context if needed.
// fn is recorded as the new m's m.mstartfn.
// id is an optional pre-allocated m ID; omit it by passing -1.
//
// This function is allowed to have write barriers even if the caller
// isn't because it borrows _p_.
//
//go:yeswritebarrierrec
func allocm(_p_ *p, fn func(), id int64) *m {
_g_ := getg()
acquirem() // disable GC because it can be called from sysmon
if _g_.m.p == 0 {
acquirep(_p_) // temporarily borrow p for mallocs in this function
}
// Release the free M list. We need to do this somewhere and
// this may free up a stack we can use.
if sched.freem != nil {
lock(&sched.lock)
var newList *m
for freem := sched.freem; freem != nil; {
if freem.freeWait != 0 {
next := freem.freelink
freem.freelink = newList
newList = freem
freem = next
continue
}
stackfree(freem.g0.stack)
freem = freem.freelink
}
sched.freem = newList
unlock(&sched.lock)
}
mp := new(m)
mp.mstartfn = fn
mcommoninit(mp, id)
// In case of cgo, Solaris, illumos, or Darwin, pthread_create will make us a stack.
// Windows and Plan 9 will lay out the sched stack on the OS stack.
if iscgo || GOOS == "solaris" || GOOS == "illumos" || GOOS == "windows" || GOOS == "plan9" || GOOS == "darwin" {
mp.g0 = malg(-1)
} else {
mp.g0 = malg(8192 * sys.StackGuardMultiplier)
}
mp.g0.m = mp
if _p_ == _g_.m.p.ptr() {
releasep()
}
releasem(_g_.m)
return mp
}
// needm is called when a cgo callback happens on a
// thread without an m (a thread not created by Go).
// In this case, needm is expected to find an m to use
// and return with m, g initialized correctly.
// Since m and g are not set now (likely nil, but see below)
// needm is limited in what routines it can call. In particular
// it can only call nosplit functions (textflag 7) and cannot
// do any scheduling that requires an m.
//
// In order to avoid needing heavy lifting here, we adopt
// the following strategy: there is a stack of available m's
// that can be stolen. Using compare-and-swap
// to pop from the stack has ABA races, so we simulate
// a lock by doing an exchange (via Casuintptr) to steal the stack
// head and replace the top pointer with MLOCKED (1).
// This serves as a simple spin lock that we can use even
// without an m. The thread that locks the stack in this way
// unlocks the stack by storing a valid stack head pointer.
//
// In order to make sure that there is always an m structure
// available to be stolen, we maintain the invariant that there
// is always one more than needed. At the beginning of the
// program (if cgo is in use) the list is seeded with a single m.
// If needm finds that it has taken the last m off the list, its job
// is - once it has installed its own m so that it can do things like
// allocate memory - to create a spare m and put it on the list.
//
// Each of these extra m's also has a g0 and a curg that are
// pressed into service as the scheduling stack and current
// goroutine for the duration of the cgo callback.
//
// When the callback is done with the m, it calls dropm to
// put the m back on the list.
//go:nosplit
func needm(x byte) {
if (iscgo || GOOS == "windows") && !cgoHasExtraM {
// Can happen if C/C++ code calls Go from a global ctor.
// Can also happen on Windows if a global ctor uses a
// callback created by syscall.NewCallback. See issue #6751
// for details.
//
// Can not throw, because scheduler is not initialized yet.
write(2, unsafe.Pointer(&earlycgocallback[0]), int32(len(earlycgocallback)))
exit(1)
}
// Lock extra list, take head, unlock popped list.
// nilokay=false is safe here because of the invariant above,
// that the extra list always contains or will soon contain
// at least one m.
mp := lockextra(false)
// Set needextram when we've just emptied the list,
// so that the eventual call into cgocallbackg will
// allocate a new m for the extra list. We delay the
// allocation until then so that it can be done
// after exitsyscall makes sure it is okay to be
// running at all (that is, there's no garbage collection
// running right now).
mp.needextram = mp.schedlink == 0
extraMCount--
unlockextra(mp.schedlink.ptr())
// Save and block signals before installing g.
// Once g is installed, any incoming signals will try to execute,
// but we won't have the sigaltstack settings and other data
// set up appropriately until the end of minit, which will
// unblock the signals. This is the same dance as when
// starting a new m to run Go code via newosproc.
msigsave(mp)
sigblock()
// Install g (= m->g0) and set the stack bounds
// to match the current stack. We don't actually know
// how big the stack is, like we don't know how big any
// scheduling stack is, but we assume there's at least 32 kB,
// which is more than enough for us.
setg(mp.g0)
_g_ := getg()
_g_.stack.hi = uintptr(noescape(unsafe.Pointer(&x))) + 1024
_g_.stack.lo = uintptr(noescape(unsafe.Pointer(&x))) - 32*1024
_g_.stackguard0 = _g_.stack.lo + _StackGuard
// Initialize this thread to use the m.
asminit()
minit()
// mp.curg is now a real goroutine.
casgstatus(mp.curg, _Gdead, _Gsyscall)
atomic.Xadd(&sched.ngsys, -1)
}
var earlycgocallback = []byte("fatal error: cgo callback before cgo call\n")
// newextram allocates m's and puts them on the extra list.
// It is called with a working local m, so that it can do things
// like call schedlock and allocate.
func newextram() {
c := atomic.Xchg(&extraMWaiters, 0)
if c > 0 {
for i := uint32(0); i < c; i++ {
oneNewExtraM()
}
} else {
// Make sure there is at least one extra M.
mp := lockextra(true)
unlockextra(mp)
if mp == nil {
oneNewExtraM()
}
}
}
// oneNewExtraM allocates an m and puts it on the extra list.
func oneNewExtraM() {
// Create extra goroutine locked to extra m.
// The goroutine is the context in which the cgo callback will run.
// The sched.pc will never be returned to, but setting it to
// goexit makes clear to the traceback routines where
// the goroutine stack ends.
mp := allocm(nil, nil, -1)
gp := malg(4096)
gp.sched.pc = funcPC(goexit) + sys.PCQuantum
gp.sched.sp = gp.stack.hi
gp.sched.sp -= 4 * sys.RegSize // extra space in case of reads slightly beyond frame
gp.sched.lr = 0
gp.sched.g = guintptr(unsafe.Pointer(gp))
gp.syscallpc = gp.sched.pc
gp.syscallsp = gp.sched.sp
gp.stktopsp = gp.sched.sp
// malg returns status as _Gidle. Change to _Gdead before
// adding to allg where GC can see it. We use _Gdead to hide
// this from tracebacks and stack scans since it isn't a
// "real" goroutine until needm grabs it.
casgstatus(gp, _Gidle, _Gdead)
gp.m = mp
mp.curg = gp
mp.lockedInt++
mp.lockedg.set(gp)
gp.lockedm.set(mp)
gp.goid = int64(atomic.Xadd64(&sched.goidgen, 1))
if raceenabled {
gp.racectx = racegostart(funcPC(newextram) + sys.PCQuantum)
}
// put on allg for garbage collector
allgadd(gp)
// gp is now on the allg list, but we don't want it to be
// counted by gcount. It would be more "proper" to increment
// sched.ngfree, but that requires locking. Incrementing ngsys
// has the same effect.
atomic.Xadd(&sched.ngsys, +1)
// Add m to the extra list.
mnext := lockextra(true)
mp.schedlink.set(mnext)
extraMCount++
unlockextra(mp)
}
// dropm is called when a cgo callback has called needm but is now
// done with the callback and returning back into the non-Go thread.
// It puts the current m back onto the extra list.
//
// The main expense here is the call to signalstack to release the
// m's signal stack, and then the call to needm on the next callback
// from this thread. It is tempting to try to save the m for next time,
// which would eliminate both these costs, but there might not be
// a next time: the current thread (which Go does not control) might exit.
// If we saved the m for that thread, there would be an m leak each time
// such a thread exited. Instead, we acquire and release an m on each
// call. These should typically not be scheduling operations, just a few
// atomics, so the cost should be small.
//
// TODO(rsc): An alternative would be to allocate a dummy pthread per-thread
// variable using pthread_key_create. Unlike the pthread keys we already use
// on OS X, this dummy key would never be read by Go code. It would exist
// only so that we could register a thread-exit-time destructor.
// That destructor would put the m back onto the extra list.
// This is purely a performance optimization. The current version,
// in which dropm happens on each cgo call, is still correct too.
// We may have to keep the current version on systems with cgo
// but without pthreads, like Windows.
func dropm() {
// Clear m and g, and return m to the extra list.
// After the call to setg we can only call nosplit functions
// with no pointer manipulation.
mp := getg().m
// Return mp.curg to dead state.
casgstatus(mp.curg, _Gsyscall, _Gdead)
mp.curg.preemptStop = false
atomic.Xadd(&sched.ngsys, +1)
// Block signals before unminit.
// Unminit unregisters the signal handling stack (but needs g on some systems).
// Setg(nil) clears g, which is the signal handler's cue not to run Go handlers.
// It's important not to try to handle a signal between those two steps.
sigmask := mp.sigmask
sigblock()
unminit()
mnext := lockextra(true)
extraMCount++
mp.schedlink.set(mnext)
setg(nil)
// Commit the release of mp.
unlockextra(mp)
msigrestore(sigmask)
}
// A helper function for EnsureDropM.
func getm() uintptr {
return uintptr(unsafe.Pointer(getg().m))
}
var extram uintptr
var extraMCount uint32 // Protected by lockextra
var extraMWaiters uint32
// lockextra locks the extra list and returns the list head.
// The caller must unlock the list by storing a new list head
// to extram. If nilokay is true, then lockextra will
// return a nil list head if that's what it finds. If nilokay is false,
// lockextra will keep waiting until the list head is no longer nil.
//go:nosplit
func lockextra(nilokay bool) *m {
const locked = 1
incr := false
for {
old := atomic.Loaduintptr(&extram)
if old == locked {
osyield()
continue
}
if old == 0 && !nilokay {
if !incr {
// Add 1 to the number of threads
// waiting for an M.
// This is cleared by newextram.
atomic.Xadd(&extraMWaiters, 1)
incr = true
}
usleep(1)
continue
}
if atomic.Casuintptr(&extram, old, locked) {
return (*m)(unsafe.Pointer(old))
}
osyield()
continue
}
}
//go:nosplit
func unlockextra(mp *m) {
atomic.Storeuintptr(&extram, uintptr(unsafe.Pointer(mp)))
}
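The lock-by-sentinel protocol that lockextra/unlockextra implement (the list head doubles as a spin lock by swapping in the value 1) can be sketched in safe Go. Indices stand in for the real uintptr list head, and all names here (lockList, push, pop) are illustrative.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

const (
	empty  uint64 = 0 // list is empty
	locked uint64 = 1 // sentinel: someone owns the list
	// Real entries are stored as index+2 so they never
	// collide with the empty/locked sentinels.
)

type node struct{ next uint64 }

var (
	pool []node // backing store for list nodes
	head uint64 // empty, locked, or index+2
)

// lockList spins until it owns the list head, then returns it.
// The runtime's lockextra additionally calls osyield while spinning.
func lockList() uint64 {
	for {
		old := atomic.LoadUint64(&head)
		if old == locked {
			continue // someone else holds the "lock"; spin
		}
		if atomic.CompareAndSwapUint64(&head, old, locked) {
			return old // we own the list; head parked at the sentinel
		}
	}
}

// unlockList publishes a new head, releasing the lock
// (the runtime's unlockextra).
func unlockList(h uint64) { atomic.StoreUint64(&head, h) }

func push(i int) {
	h := lockList()
	pool[i].next = h
	unlockList(uint64(i) + 2)
}

func pop() (int, bool) {
	h := lockList()
	if h == empty {
		unlockList(h)
		return 0, false
	}
	i := int(h - 2)
	unlockList(pool[i].next)
	return i, true
}

func main() {
	pool = make([]node, 3)
	push(0)
	push(1)
	i, _ := pop()
	fmt.Println("popped node", i) // LIFO: node 1 comes off first
}
```

Because every mutation happens while the head holds the sentinel, the ABA problem of a bare CAS-based pop (described in the needm comment above) cannot arise.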
// execLock serializes exec and clone to avoid bugs or unspecified behaviour
// around exec'ing while creating/destroying threads. See issue #19546.
var execLock rwmutex
// newmHandoff contains a list of m structures that need new OS threads.
// This is used by newm in situations where newm itself can't safely
// start an OS thread.
var newmHandoff struct {
lock mutex
// newm points to a list of M structures that need new OS
// threads. The list is linked through m.schedlink.
newm muintptr
// waiting indicates that wake needs to be notified when an m
// is put on the list.
waiting bool
wake note
// haveTemplateThread indicates that the templateThread has
// been started. This is not protected by lock. Use cas to set
// to 1.
haveTemplateThread uint32
}
// Create a new m. It will start off with a call to fn, or else the scheduler.
// fn needs to be static and not a heap-allocated closure.
// May run with m.p==nil, so write barriers are not allowed.
//
// id is an optional pre-allocated m ID; omit it by passing -1.
//go:nowritebarrierrec
func newm(fn func(), _p_ *p, id int64) {
mp := allocm(_p_, fn, id)
mp.nextp.set(_p_)
mp.sigmask = initSigmask
if gp := getg(); gp != nil && gp.m != nil && (gp.m.lockedExt != 0 || gp.m.incgo) && GOOS != "plan9" {
// We're on a locked M or a thread that may have been
// started by C. The kernel state of this thread may
// be strange (the user may have locked it for that
// purpose). We don't want to clone that into another
// thread. Instead, ask a known-good thread to create
// the thread for us.
//
// This is disabled on Plan 9. See golang.org/issue/22227.
//
// TODO: This may be unnecessary on Windows, which
// doesn't model thread creation off fork.
lock(&newmHandoff.lock)
if newmHandoff.haveTemplateThread == 0 {
throw("on a locked thread with no template thread")
}
mp.schedlink = newmHandoff.newm
newmHandoff.newm.set(mp)
if newmHandoff.waiting {
newmHandoff.waiting = false
notewakeup(&newmHandoff.wake)
}
unlock(&newmHandoff.lock)
return
}
newm1(mp)
}
func newm1(mp *m) {
if iscgo {
var ts cgothreadstart
if _cgo_thread_start == nil {
throw("_cgo_thread_start missing")
}
ts.g.set(mp.g0)
ts.tls = (*uint64)(unsafe.Pointer(&mp.tls[0]))
ts.fn = unsafe.Pointer(funcPC(mstart))
if msanenabled {
msanwrite(unsafe.Pointer(&ts), unsafe.Sizeof(ts))
}
execLock.rlock() // Prevent process clone.
asmcgocall(_cgo_thread_start, unsafe.Pointer(&ts))
execLock.runlock()
return
}
execLock.rlock() // Prevent process clone.
newosproc(mp)
execLock.runlock()
}
// startTemplateThread starts the template thread if it is not already
// running.
//
// The calling thread must itself be in a known-good state.
func startTemplateThread() {
if GOARCH == "wasm" { // no threads on wasm yet
return
}
// Disable preemption to guarantee that the template thread will be
// created before a park once haveTemplateThread is set.
mp := acquirem()
if !atomic.Cas(&newmHandoff.haveTemplateThread, 0, 1) {
releasem(mp)
return
}
newm(templateThread, nil, -1)
releasem(mp)
}
// templateThread is a thread in a known-good state that exists solely
// to start new threads in known-good states when the calling thread
// may not be in a good state.
//
// Many programs never need this, so templateThread is started lazily
// when we first enter a state that might lead to running on a thread
// in an unknown state.
//
// templateThread runs on an M without a P, so it must not have write
// barriers.
//
//go:nowritebarrierrec
func templateThread() {
lock(&sched.lock)
sched.nmsys++
checkdead()
unlock(&sched.lock)
for {
lock(&newmHandoff.lock)
for newmHandoff.newm != 0 {
newm := newmHandoff.newm.ptr()
newmHandoff.newm = 0
unlock(&newmHandoff.lock)
for newm != nil {
next := newm.schedlink.ptr()
newm.schedlink = 0
newm1(newm)
newm = next
}
lock(&newmHandoff.lock)
}
newmHandoff.waiting = true
noteclear(&newmHandoff.wake)
unlock(&newmHandoff.lock)
notesleep(&newmHandoff.wake)
}
}
// Stops execution of the current m until new work is available.
// Returns with acquired P.
func stopm() {
_g_ := getg()
if _g_.m.locks != 0 {
throw("stopm holding locks")
}
if _g_.m.p != 0 {
throw("stopm holding p")
}
if _g_.m.spinning {
throw("stopm spinning")
}
lock(&sched.lock)
mput(_g_.m)
unlock(&sched.lock)
notesleep(&_g_.m.park)
noteclear(&_g_.m.park)
acquirep(_g_.m.nextp.ptr())
_g_.m.nextp = 0
}
func mspinning() {
// startm's caller incremented nmspinning. Set the new M's spinning.
getg().m.spinning = true
}
// Schedules some M to run the p (creates an M if necessary).
// If p==nil, tries to get an idle P; if there are no idle P's, does nothing.
// May run with m.p==nil, so write barriers are not allowed.
// If spinning is set, the caller has incremented nmspinning and startm will
// either decrement nmspinning or set m.spinning in the newly started M.
//go:nowritebarrierrec
func startm(_p_ *p, spinning bool) {
lock(&sched.lock)
if _p_ == nil {
_p_ = pidleget()
if _p_ == nil {
unlock(&sched.lock)
if spinning {
// The caller incremented nmspinning, but there are no idle Ps,
// so it's okay to just undo the increment and give up.
if int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {
throw("startm: negative nmspinning")
}
}
return
}
}
mp := mget()
if mp == nil {
// No M is available, we must drop sched.lock and call newm.
// However, we already own a P to assign to the M.
//
// Once sched.lock is released, another G (e.g., in a syscall)
// could find no idle P while checkdead finds a runnable G but
// no running M's because this new M hasn't started yet, thus
// throwing in an apparent deadlock.
//
// Avoid this situation by pre-allocating the ID for the new M,
// thus marking it as 'running' before we drop sched.lock. This
// new M will eventually run the scheduler to execute any
// queued G's.
id := mReserveID()
unlock(&sched.lock)
var fn func()
if spinning {
// The caller incremented nmspinning, so set m.spinning in the new M.
fn = mspinning
}
newm(fn, _p_, id)
return
}
unlock(&sched.lock)
if mp.spinning {
throw("startm: m is spinning")
}
if mp.nextp != 0 {
throw("startm: m has p")
}
if spinning && !runqempty(_p_) {
throw("startm: p has runnable gs")
}
// The caller incremented nmspinning, so set m.spinning in the new M.
mp.spinning = spinning
mp.nextp.set(_p_)
notewakeup(&mp.park)
}
// Hands off P from syscall or locked M.
// Always runs without a P, so write barriers are not allowed.
//go:nowritebarrierrec
func handoffp(_p_ *p) {
// handoffp must start an M in any situation where
// findrunnable would return a G to run on _p_.
// if it has local work, start it straight away
if !runqempty(_p_) || sched.runqsize != 0 {
startm(_p_, false)
return
}
// if it has GC work, start it straight away
if gcBlackenEnabled != 0 && gcMarkWorkAvailable(_p_) {
startm(_p_, false)
return
}
// no local work, check that there are no spinning/idle M's,
// otherwise our help is not required
if atomic.Load(&sched.nmspinning)+atomic.Load(&sched.npidle) == 0 && atomic.Cas(&sched.nmspinning, 0, 1) { // TODO: fast atomic
startm(_p_, true)
return
}
lock(&sched.lock)
if sched.gcwaiting != 0 {
_p_.status = _Pgcstop
sched.stopwait--
if sched.stopwait == 0 {
notewakeup(&sched.stopnote)
}
unlock(&sched.lock)
return
}
if _p_.runSafePointFn != 0 && atomic.Cas(&_p_.runSafePointFn, 1, 0) {
sched.safePointFn(_p_)
sched.safePointWait--
if sched.safePointWait == 0 {
notewakeup(&sched.safePointNote)
}
}
if sched.runqsize != 0 {
unlock(&sched.lock)
startm(_p_, false)
return
}
// If this is the last running P and nobody is polling network,
// need to wakeup another M to poll network.
if sched.npidle == uint32(gomaxprocs-1) && atomic.Load64(&sched.lastpoll) != 0 {
unlock(&sched.lock)
startm(_p_, false)
return
}
if when := nobarrierWakeTime(_p_); when != 0 {
wakeNetPoller(when)
}
pidleput(_p_)
unlock(&sched.lock)
}
// Tries to add one more P to execute G's.
// Called when a G is made runnable (newproc, ready).
func wakep() {
if atomic.Load(&sched.npidle) == 0 {
return
}
// be conservative about spinning threads
if atomic.Load(&sched.nmspinning) != 0 || !atomic.Cas(&sched.nmspinning, 0, 1) {
return
}
startm(nil, true)
}
// Stops execution of the current m that is locked to a g until the g is runnable again.
// Returns with acquired P.
func stoplockedm() {
_g_ := getg()
if _g_.m.lockedg == 0 || _g_.m.lockedg.ptr().lockedm.ptr() != _g_.m {
throw("stoplockedm: inconsistent locking")
}
if _g_.m.p != 0 {
// Schedule another M to run this p.
_p_ := releasep()
handoffp(_p_)
}
incidlelocked(1)
// Wait until another thread schedules lockedg again.
notesleep(&_g_.m.park)
noteclear(&_g_.m.park)
status := readgstatus(_g_.m.lockedg.ptr())
if status&^_Gscan != _Grunnable {
print("runtime:stoplockedm: g is not Grunnable or Gscanrunnable\n")
dumpgstatus(_g_)
throw("stoplockedm: not runnable")
}
acquirep(_g_.m.nextp.ptr())
_g_.m.nextp = 0
}
// Schedules the locked m to run the locked gp.
// May run during STW, so write barriers are not allowed.
//go:nowritebarrierrec
func startlockedm(gp *g) {
_g_ := getg()
mp := gp.lockedm.ptr()
if mp == _g_.m {
throw("startlockedm: locked to me")
}
if mp.nextp != 0 {
throw("startlockedm: m has p")
}
// directly handoff current P to the locked m
incidlelocked(-1)
_p_ := releasep()
mp.nextp.set(_p_)
notewakeup(&mp.park)
stopm()
}
// Stops the current m for stopTheWorld.
// Returns when the world is restarted.
func gcstopm() {
_g_ := getg()
if sched.gcwaiting == 0 {
throw("gcstopm: not waiting for gc")
}
if _g_.m.spinning {
_g_.m.spinning = false
// OK to just drop nmspinning here,
// startTheWorld will unpark threads as necessary.
if int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {
throw("gcstopm: negative nmspinning")
}
}
_p_ := releasep()
lock(&sched.lock)
_p_.status = _Pgcstop
sched.stopwait--
if sched.stopwait == 0 {
notewakeup(&sched.stopnote)
}
unlock(&sched.lock)
stopm()
}
// Schedules gp to run on the current M.
// If inheritTime is true, gp inherits the remaining time in the
// current time slice. Otherwise, it starts a new time slice.
// Never returns.
//
// Write barriers are allowed because this is called immediately after
// acquiring a P in several places.
//
//go:yeswritebarrierrec
func execute(gp *g, inheritTime bool) {
_g_ := getg()
// Assign gp.m before entering _Grunning so running Gs have an
// M.
_g_.m.curg = gp
gp.m = _g_.m
casgstatus(gp, _Grunnable, _Grunning)
gp.waitsince = 0
gp.preempt = false
gp.stackguard0 = gp.stack.lo + _StackGuard
if !inheritTime {
_g_.m.p.ptr().schedtick++
}
// Check whether the profiler needs to be turned on or off.
hz := sched.profilehz
if _g_.m.profilehz != hz {
setThreadCPUProfiler(hz)
}
if trace.enabled {
// GoSysExit has to happen when we have a P, but before GoStart.
// So we emit it here.
if gp.syscallsp != 0 && gp.sysblocktraced {
traceGoSysExit(gp.sysexitticks)
}
traceGoStart()
}
gogo(&gp.sched)
}
// Finds a runnable goroutine to execute.
// Tries to steal from other P's, get g from local or global queue, poll network.
func findrunnable() (gp *g, inheritTime bool) {
_g_ := getg()
// The conditions here and in handoffp must agree: if
// findrunnable would return a G to run, handoffp must start
// an M.
top:
_p_ := _g_.m.p.ptr()
if sched.gcwaiting != 0 {
gcstopm()
goto top
}
if _p_.runSafePointFn != 0 {
runSafePointFn()
}
now, pollUntil, _ := checkTimers(_p_, 0)
if fingwait && fingwake {
if gp := wakefing(); gp != nil {
ready(gp, 0, true)
}
}
if *cgo_yield != nil {
asmcgocall(*cgo_yield, nil)
}
// local runq
if gp, inheritTime := runqget(_p_); gp != nil {
return gp, inheritTime
}
// global runq
if sched.runqsize != 0 {
lock(&sched.lock)
gp := globrunqget(_p_, 0)
unlock(&sched.lock)
if gp != nil {
return gp, false
}
}
// Poll network.
// This netpoll is only an optimization before we resort to stealing.
// We can safely skip it if there are no waiters or a thread is blocked
// in netpoll already. If there is any kind of logical race with that
// blocked thread (e.g. it has already returned from netpoll, but does
// not set lastpoll yet), this thread will do blocking netpoll below
// anyway.
if netpollinited() && atomic.Load(&netpollWaiters) > 0 && atomic.Load64(&sched.lastpoll) != 0 {
if list := netpoll(0); !list.empty() { // non-blocking
gp := list.pop()
injectglist(&list)
casgstatus(gp, _Gwaiting, _Grunnable)
if trace.enabled {
traceGoUnpark(gp, 0)
}
return gp, false
}
}
// Steal work from other P's.
procs := uint32(gomaxprocs)
ranTimer := false
// If number of spinning M's >= number of busy P's, block.
// This is necessary to prevent excessive CPU consumption
// when GOMAXPROCS>>1 but the program parallelism is low.
if !_g_.m.spinning && 2*atomic.Load(&sched.nmspinning) >= procs-atomic.Load(&sched.npidle) {
goto stop
}
if !_g_.m.spinning {
_g_.m.spinning = true
atomic.Xadd(&sched.nmspinning, 1)
}
for i := 0; i < 4; i++ {
for enum := stealOrder.start(fastrand()); !enum.done(); enum.next() {
if sched.gcwaiting != 0 {
goto top
}
stealRunNextG := i > 2 // first look for ready queues with more than 1 g
p2 := allp[enum.position()]
if _p_ == p2 {
continue
}
if gp := runqsteal(_p_, p2, stealRunNextG); gp != nil {
return gp, false
}
// Consider stealing timers from p2.
// This call to checkTimers is the only place where
// we hold a lock on a different P's timers.
// Lock contention can be a problem here, so
// initially avoid grabbing the lock if p2 is running
// and is not marked for preemption. If p2 is running
// and not being preempted we assume it will handle its
// own timers.
// If we're still looking for work after checking all
// the P's, then go ahead and steal from an active P.
if i > 2 || (i > 1 && shouldStealTimers(p2)) {
tnow, w, ran := checkTimers(p2, now)
now = tnow
if w != 0 && (pollUntil == 0 || w < pollUntil) {
pollUntil = w
}
if ran {
// Running the timers may have
// made an arbitrary number of G's
// ready and added them to this P's
// local run queue. That invalidates
// the assumption of runqsteal
// that it always has room to add
// stolen G's. So check now if there
// is a local G to run.
if gp, inheritTime := runqget(_p_); gp != nil {
return gp, inheritTime
}
ranTimer = true
}
}
}
}
if ranTimer {
// Running a timer may have made some goroutine ready.
goto top
}
stop:
// We have nothing to do. If we're in the GC mark phase, can
// safely scan and blacken objects, and have work to do, run
// idle-time marking rather than give up the P.
if gcBlackenEnabled != 0 && _p_.gcBgMarkWorker != 0 && gcMarkWorkAvailable(_p_) {
_p_.gcMarkWorkerMode = gcMarkWorkerIdleMode
gp := _p_.gcBgMarkWorker.ptr()
casgstatus(gp, _Gwaiting, _Grunnable)
if trace.enabled {
traceGoUnpark(gp, 0)
}
return gp, false
}
delta := int64(-1)
if pollUntil != 0 {
// checkTimers ensures that pollUntil > now.
delta = pollUntil - now
}
// wasm only:
// If a callback returned and no other goroutine is awake,
// then wake the event handler goroutine, which pauses execution
// until a callback is triggered.
gp, otherReady := beforeIdle(delta)
if gp != nil {
casgstatus(gp, _Gwaiting, _Grunnable)
if trace.enabled {
traceGoUnpark(gp, 0)
}
return gp, false
}
if otherReady {
goto top
}
// Before we drop our P, make a snapshot of the allp slice,
// which can change underfoot once we no longer block
// safe-points. We don't need to snapshot the contents because
// everything up to cap(allp) is immutable.
allpSnapshot := allp
// return P and block
lock(&sched.lock)
if sched.gcwaiting != 0 || _p_.runSafePointFn != 0 {
unlock(&sched.lock)
goto top
}
if sched.runqsize != 0 {
gp := globrunqget(_p_, 0)
unlock(&sched.lock)
return gp, false
}
if releasep() != _p_ {
throw("findrunnable: wrong p")
}
pidleput(_p_)
unlock(&sched.lock)
// Delicate dance: thread transitions from spinning to non-spinning state,
// potentially concurrently with submission of new goroutines. We must
// drop nmspinning first and then check all per-P queues again (with
// #StoreLoad memory barrier in between). If we do it the other way around,
// another thread can submit a goroutine after we've checked all run queues
// but before we drop nmspinning; as the result nobody will unpark a thread
// to run the goroutine.
// If we discover new work below, we need to restore m.spinning as a signal
// for resetspinning to unpark a new worker thread (because there can be more
// than one starving goroutine). However, if after discovering new work
// we also observe no idle Ps, it is OK to just park the current thread:
// the system is fully loaded so no spinning threads are required.
// Also see "Worker thread parking/unparking" comment at the top of the file.
wasSpinning := _g_.m.spinning
if _g_.m.spinning {
_g_.m.spinning = false
if int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {
throw("findrunnable: negative nmspinning")
}
}
// check all runqueues once again
for _, _p_ := range allpSnapshot {
if !runqempty(_p_) {
lock(&sched.lock)
_p_ = pidleget()
unlock(&sched.lock)
if _p_ != nil {
acquirep(_p_)
if wasSpinning {
_g_.m.spinning = true
atomic.Xadd(&sched.nmspinning, 1)
}
goto top
}
break
}
}
// Check for idle-priority GC work again.
if gcBlackenEnabled != 0 && gcMarkWorkAvailable(nil) {
lock(&sched.lock)
_p_ = pidleget()
if _p_ != nil && _p_.gcBgMarkWorker == 0 {
pidleput(_p_)
_p_ = nil
}
unlock(&sched.lock)
if _p_ != nil {
acquirep(_p_)
if wasSpinning {
_g_.m.spinning = true
atomic.Xadd(&sched.nmspinning, 1)
}
// Go back to idle GC check.
goto stop
}
}
// poll network
if netpollinited() && (atomic.Load(&netpollWaiters) > 0 || pollUntil != 0) && atomic.Xchg64(&sched.lastpoll, 0) != 0 {
atomic.Store64(&sched.pollUntil, uint64(pollUntil))
if _g_.m.p != 0 {
throw("findrunnable: netpoll with p")
}
if _g_.m.spinning {
throw("findrunnable: netpoll with spinning")
}
if faketime != 0 {
// When using fake time, just poll.
delta = 0
}
list := netpoll(delta) // block until new work is available
atomic.Store64(&sched.pollUntil, 0)
atomic.Store64(&sched.lastpoll, uint64(nanotime()))
if faketime != 0 && list.empty() {
// Using fake time and nothing is ready; stop M.
// When all M's stop, checkdead will call timejump.
stopm()
goto top
}
lock(&sched.lock)
_p_ = pidleget()
unlock(&sched.lock)
if _p_ == nil {
injectglist(&list)
} else {
acquirep(_p_)
if !list.empty() {
gp := list.pop()
injectglist(&list)
casgstatus(gp, _Gwaiting, _Grunnable)
if trace.enabled {
traceGoUnpark(gp, 0)
}
return gp, false
}
if wasSpinning {
_g_.m.spinning = true
atomic.Xadd(&sched.nmspinning, 1)
}
goto top
}
} else if pollUntil != 0 && netpollinited() {
pollerPollUntil := int64(atomic.Load64(&sched.pollUntil))
if pollerPollUntil == 0 || pollerPollUntil > pollUntil {
netpollBreak()
}
}
stopm()
goto top
}
// pollWork reports whether there is non-background work this P could
// be doing. This is a fairly lightweight check to be used for
// background work loops, like idle GC. It checks a subset of the
// conditions checked by the actual scheduler.
func pollWork() bool {
if sched.runqsize != 0 {
return true
}
p := getg().m.p.ptr()
if !runqempty(p) {
return true
}
if netpollinited() && atomic.Load(&netpollWaiters) > 0 && sched.lastpoll != 0 {
if list := netpoll(0); !list.empty() {
injectglist(&list)
return true
}
}
return false
}
// wakeNetPoller wakes up the thread sleeping in the network poller,
// if there is one, and if it isn't going to wake up anyhow before
// the when argument.
func wakeNetPoller(when int64) {
if atomic.Load64(&sched.lastpoll) == 0 {
// In findrunnable we ensure that when polling the pollUntil
// field is either zero or the time to which the current
// poll is expected to run. This can have a spurious wakeup
// but should never miss a wakeup.
pollerPollUntil := int64(atomic.Load64(&sched.pollUntil))
if pollerPollUntil == 0 || pollerPollUntil > when {
netpollBreak()
}
}
}
func resetspinning() {
_g_ := getg()
if !_g_.m.spinning {
throw("resetspinning: not a spinning m")
}
_g_.m.spinning = false
nmspinning := atomic.Xadd(&sched.nmspinning, -1)
if int32(nmspinning) < 0 {
throw("findrunnable: negative nmspinning")
}
// M wakeup policy is deliberately somewhat conservative, so check if we
// need to wakeup another P here. See "Worker thread parking/unparking"
// comment at the top of the file for details.
wakep()
}
// injectglist adds each runnable G on the list to some run queue,
// and clears glist. If there is no current P, they are added to the
// global queue, and up to npidle M's are started to run them.
// Otherwise, for each idle P, this adds a G to the global queue
// and starts an M. Any remaining G's are added to the current P's
// local run queue.
// This may temporarily acquire the scheduler lock.
// Can run concurrently with GC.
func injectglist(glist *gList) {
if glist.empty() {
return
}
if trace.enabled {
for gp := glist.head.ptr(); gp != nil; gp = gp.schedlink.ptr() {
traceGoUnpark(gp, 0)
}
}
// Mark all the goroutines as runnable before we put them
// on the run queues.
head := glist.head.ptr()
var tail *g
qsize := 0
for gp := head; gp != nil; gp = gp.schedlink.ptr() {
tail = gp
qsize++
casgstatus(gp, _Gwaiting, _Grunnable)
}
// Turn the gList into a gQueue.
var q gQueue
q.head.set(head)
q.tail.set(tail)
*glist = gList{}
startIdle := func(n int) {
for ; n != 0 && sched.npidle != 0; n-- {
startm(nil, false)
}
}
pp := getg().m.p.ptr()
if pp == nil {
lock(&sched.lock)
globrunqputbatch(&q, int32(qsize))
unlock(&sched.lock)
startIdle(qsize)
return
}
npidle := int(atomic.Load(&sched.npidle))
var globq gQueue
var n int
for n = 0; n < npidle && !q.empty(); n++ {
g := q.pop()
globq.pushBack(g)
}
if n > 0 {
lock(&sched.lock)
globrunqputbatch(&globq, int32(n))
unlock(&sched.lock)
startIdle(n)
qsize -= n
}
if !q.empty() {
runqputbatch(pp, &q, qsize)
}
}
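injectglist above turns the gList into a gQueue by walking schedlink pointers. The runtime's gQueue is an intrusive FIFO; a self-contained sketch of the same structure, with illustrative names rather than the runtime's types:

```go
package main

import "fmt"

// node mimics a g with a schedlink field: the link lives inside the
// element itself, so queuing needs no extra allocation.
type node struct {
	id   int
	next *node
}

// queue is an intrusive FIFO like the runtime's gQueue.
type queue struct {
	head, tail *node
}

func (q *queue) pushBack(n *node) {
	n.next = nil
	if q.tail != nil {
		q.tail.next = n
	} else {
		q.head = n
	}
	q.tail = n
}

func (q *queue) pop() *node {
	n := q.head
	if n != nil {
		q.head = n.next
		if q.head == nil {
			q.tail = nil
		}
	}
	return n
}

func main() {
	var q queue
	for i := 1; i <= 3; i++ {
		q.pushBack(&node{id: i})
	}
	for n := q.pop(); n != nil; n = q.pop() {
		fmt.Println(n.id) // pops in insertion order: 1, 2, 3
	}
}
```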
// One round of scheduler: find a runnable goroutine and execute it.
// Never returns.
func schedule() {
_g_ := getg()
if _g_.m.locks != 0 {
throw("schedule: holding locks")
}
if _g_.m.lockedg != 0 {
stoplockedm()
execute(_g_.m.lockedg.ptr(), false) // Never returns.
}
// We should not schedule away from a g that is executing a cgo call,
// since the cgo call is using the m's g0 stack.
if _g_.m.incgo {
throw("schedule: in cgo")
}
top:
pp := _g_.m.p.ptr()
pp.preempt = false
if sched.gcwaiting != 0 {
gcstopm()
goto top
}
if pp.runSafePointFn != 0 {
runSafePointFn()
}
// Sanity check: if we are spinning, the run queue should be empty.
// Check this before calling checkTimers, as that might call
// goready to put a ready goroutine on the local run queue.
if _g_.m.spinning && (pp.runnext != 0 || pp.runqhead != pp.runqtail) {
throw("schedule: spinning with local work")
}
checkTimers(pp, 0)
var gp *g
var inheritTime bool
// Normal goroutines will check for need to wakeP in ready,
// but GCworkers and tracereaders will not, so the check must
// be done here instead.
tryWakeP := false
if trace.enabled || trace.shutdown {
gp = traceReader()
if gp != nil {
casgstatus(gp, _Gwaiting, _Grunnable)
traceGoUnpark(gp, 0)
tryWakeP = true
}
}
if gp == nil && gcBlackenEnabled != 0 {
gp = gcController.findRunnableGCWorker(_g_.m.p.ptr())
tryWakeP = tryWakeP || gp != nil
}
if gp == nil {
// Check the global runnable queue once in a while to ensure fairness.
// Otherwise two goroutines can completely occupy the local runqueue
// by constantly respawning each other.
if _g_.m.p.ptr().schedtick%61 == 0 && sched.runqsize > 0 {
lock(&sched.lock)
gp = globrunqget(_g_.m.p.ptr(), 1)
unlock(&sched.lock)
}
}
if gp == nil {
gp, inheritTime = runqget(_g_.m.p.ptr())
// We can see gp != nil here even if the M is spinning,
// if checkTimers added a local goroutine via goready.
}
if gp == nil {
gp, inheritTime = findrunnable() // blocks until work is available
}
// This thread is going to run a goroutine and is not spinning anymore,
// so if it was marked as spinning we need to reset it now and potentially
// start a new spinning M.
if _g_.m.spinning {
resetspinning()
}
if sched.disable.user && !schedEnabled(gp) {
// Scheduling of this goroutine is disabled. Put it on
// the list of pending runnable goroutines for when we
// re-enable user scheduling and look again.
lock(&sched.lock)
if schedEnabled(gp) {
// Something re-enabled scheduling while we
// were acquiring the lock.
unlock(&sched.lock)
} else {
sched.disable.runnable.pushBack(gp)
sched.disable.n++
unlock(&sched.lock)
goto top
}
}
// If about to schedule a not-normal goroutine (a GCworker or tracereader),
// wake a P if there is one.
if tryWakeP {
wakep()
}
if gp.lockedm != 0 {
// Hands off own p to the locked m,
// then blocks waiting for a new p.
startlockedm(gp)
goto top
}
execute(gp, inheritTime)
}
// dropg removes the association between m and the current goroutine m->curg (gp for short).
// Typically a caller sets gp's status away from Grunning and then
// immediately calls dropg to finish the job. The caller is also responsible
// for arranging that gp will be restarted using ready at an
// appropriate time. After calling dropg and arranging for gp to be
// readied later, the caller can do other work but eventually should
// call schedule to restart the scheduling of goroutines on this m.
func dropg() {
_g_ := getg()
setMNoWB(&_g_.m.curg.m, nil)
setGNoWB(&_g_.m.curg, nil)
}
// checkTimers runs any timers for the P that are ready.
// If now is not 0 it is the current time.
// It returns the current time or 0 if it is not known,
// and the time when the next timer should run or 0 if there is no next timer,
// and reports whether it ran any timers.
// If the time when the next timer should run is not 0,
// it is always larger than the returned time.
// We pass now in and out to avoid extra calls of nanotime.
//go:yeswritebarrierrec
func checkTimers(pp *p, now int64) (rnow, pollUntil int64, ran bool) {
// If there are no timers to adjust, and the first timer on
// the heap is not yet ready to run, then there is nothing to do.
if atomic.Load(&pp.adjustTimers) == 0 {
next := int64(atomic.Load64(&pp.timer0When))
if next == 0 {
return now, 0, false
}
if now == 0 {
now = nanotime()
}
if now < next {
// Next timer is not ready to run.
// But keep going if we would clear deleted timers.
// This corresponds to the condition below where
// we decide whether to call clearDeletedTimers.
if pp != getg().m.p.ptr() || int(atomic.Load(&pp.deletedTimers)) <= int(atomic.Load(&pp.numTimers)/4) {
return now, next, false
}
}
}
lock(&pp.timersLock)
adjusttimers(pp)
rnow = now
if len(pp.timers) > 0 {
if rnow == 0 {
rnow = nanotime()
}
for len(pp.timers) > 0 {
// Note that runtimer may temporarily unlock
// pp.timersLock.
if tw := runtimer(pp, rnow); tw != 0 {
if tw > 0 {
pollUntil = tw
}
break
}
ran = true
}
}
// If this is the local P, and there are a lot of deleted timers,
// clear them out. We only do this for the local P to reduce
// lock contention on timersLock.
if pp == getg().m.p.ptr() && int(atomic.Load(&pp.deletedTimers)) > len(pp.timers)/4 {
clearDeletedTimers(pp)
}
unlock(&pp.timersLock)
return rnow, pollUntil, ran
}
// shouldStealTimers reports whether we should try stealing the timers from p2.
// We don't steal timers from a running P that is not marked for preemption,
// on the assumption that it will run its own timers. This reduces
// contention on the timers lock.
func shouldStealTimers(p2 *p) bool {
if p2.status != _Prunning {
return true
}
mp := p2.m.ptr()
if mp == nil || mp.locks > 0 {
return false
}
gp := mp.curg
if gp == nil || gp.atomicstatus != _Grunning || !gp.preempt {
return false
}
return true
}
func parkunlock_c(gp *g, lock unsafe.Pointer) bool {
unlock((*mutex)(lock))
return true
}
// park continuation on g0.
func park_m(gp *g) {
_g_ := getg()
if trace.enabled {
traceGoPark(_g_.m.waittraceev, _g_.m.waittraceskip)
}
casgstatus(gp, _Grunning, _Gwaiting)
dropg()
if fn := _g_.m.waitunlockf; fn != nil {
ok := fn(gp, _g_.m.waitlock)
_g_.m.waitunlockf = nil
_g_.m.waitlock = nil
if !ok {
if trace.enabled {
traceGoUnpark(gp, 2)
}
casgstatus(gp, _Gwaiting, _Grunnable)
execute(gp, true) // Schedule it back, never returns.
}
}
schedule()
}
func goschedImpl(gp *g) {
status := readgstatus(gp)
if status&^_Gscan != _Grunning {
dumpgstatus(gp)
throw("bad g status")
}
casgstatus(gp, _Grunning, _Grunnable)
dropg()
lock(&sched.lock)
globrunqput(gp)
unlock(&sched.lock)
schedule()
}
// Gosched continuation on g0.
func gosched_m(gp *g) {
if trace.enabled {
traceGoSched()
}
goschedImpl(gp)
}
// goschedguarded is a forbidden-states-avoided version of gosched_m
func goschedguarded_m(gp *g) {
if !canPreemptM(gp.m) {
gogo(&gp.sched) // never return
}
if trace.enabled {
traceGoSched()
}
goschedImpl(gp)
}
func gopreempt_m(gp *g) {
if trace.enabled {
traceGoPreempt()
}
goschedImpl(gp)
}
// preemptPark parks gp and puts it in _Gpreempted.
//
//go:systemstack
func preemptPark(gp *g) {
if trace.enabled {
traceGoPark(traceEvGoBlock, 0)
}
status := readgstatus(gp)
if status&^_Gscan != _Grunning {
dumpgstatus(gp)
throw("bad g status")
}
gp.waitreason = waitReasonPreempted
// Transition from _Grunning to _Gscan|_Gpreempted. We can't
// be in _Grunning when we dropg because then we'd be running
// without an M, but the moment we're in _Gpreempted,
// something could claim this G before we've fully cleaned it
// up. Hence, we set the scan bit to lock down further
// transitions until we can dropg.
casGToPreemptScan(gp, _Grunning, _Gscan|_Gpreempted)
dropg()
casfrom_Gscanstatus(gp, _Gscan|_Gpreempted, _Gpreempted)
schedule()
}
// goyield is like Gosched, but it:
// - emits a GoPreempt trace event instead of a GoSched trace event
// - puts the current G on the runq of the current P instead of the globrunq
func goyield() {
checkTimeouts()
mcall(goyield_m)
}
func goyield_m(gp *g) {
if trace.enabled {
traceGoPreempt()
}
pp := gp.m.p.ptr()
casgstatus(gp, _Grunning, _Grunnable)
dropg()
runqput(pp, gp, false)
schedule()
}
// Finishes execution of the current goroutine.
func goexit1() {
if raceenabled {
racegoend()
}
if trace.enabled {
traceGoEnd()
}
mcall(goexit0)
}
// goexit continuation on g0.
func goexit0(gp *g) {
_g_ := getg()
casgstatus(gp, _Grunning, _Gdead)
if isSystemGoroutine(gp, false) {
atomic.Xadd(&sched.ngsys, -1)
}
gp.m = nil
locked := gp.lockedm != 0
gp.lockedm = 0
_g_.m.lockedg = 0
gp.preemptStop = false
gp.paniconfault = false
gp._defer = nil // should be nil already but just in case.
gp._panic = nil // non-nil for Goexit during panic. points at stack-allocated data.
gp.writebuf = nil
gp.waitreason = 0
gp.param = nil
gp.labels = nil
gp.timer = nil
if gcBlackenEnabled != 0 && gp.gcAssistBytes > 0 {
// Flush assist credit to the global pool. This gives
// better information to pacing if the application is
// rapidly creating and exiting goroutines.
scanCredit := int64(gcController.assistWorkPerByte * float64(gp.gcAssistBytes))
atomic.Xaddint64(&gcController.bgScanCredit, scanCredit)
gp.gcAssistBytes = 0
}
dropg()
if GOARCH == "wasm" { // no threads yet on wasm
gfput(_g_.m.p.ptr(), gp)
schedule() // never returns
}
if _g_.m.lockedInt != 0 {
print("invalid m->lockedInt = ", _g_.m.lockedInt, "\n")
throw("internal lockOSThread error")
}
gfput(_g_.m.p.ptr(), gp)
if locked {
// The goroutine may have locked this thread because
// it put it in an unusual kernel state. Kill it
// rather than returning it to the thread pool.
// Return to mstart, which will release the P and exit
// the thread.
if GOOS != "plan9" { // See golang.org/issue/22227.
gogo(&_g_.m.g0.sched)
} else {
// Clear lockedExt on plan9 since we may end up re-using
// this thread.
_g_.m.lockedExt = 0
}
}
schedule()
}
// save updates getg().sched to refer to pc and sp so that a following
// gogo will restore pc and sp.
//
// save must not have write barriers because invoking a write barrier
// can clobber getg().sched.
//
//go:nosplit
//go:nowritebarrierrec
func save(pc, sp uintptr) {
_g_ := getg()
_g_.sched.pc = pc
_g_.sched.sp = sp
_g_.sched.lr = 0
_g_.sched.ret = 0
_g_.sched.g = guintptr(unsafe.Pointer(_g_))
// We need to ensure ctxt is zero, but can't have a write
// barrier here. However, it should always already be zero.
// Assert that.
if _g_.sched.ctxt != nil {
badctxt()
}
}
// The goroutine g is about to enter a system call.
// Record that it's not using the cpu anymore.
// This is called only from the go syscall library and cgocall,
// not from the low-level system calls used by the runtime.
//
// Entersyscall cannot split the stack: the gosave must
// make g->sched refer to the caller's stack segment, because
// entersyscall is going to return immediately after.
//
// Nothing entersyscall calls can split the stack either.
// We cannot safely move the stack during an active call to syscall,
// because we do not know which of the uintptr arguments are
// really pointers (back into the stack).
// In practice, this means that we make the fast path run through
// entersyscall doing no-split things, and the slow path has to use systemstack
// to run bigger things on the system stack.
//
// reentersyscall is the entry point used by cgo callbacks, where explicitly
// saved SP and PC are restored. This is needed when exitsyscall will be called
// from a function further up in the call stack than the parent, as g->syscallsp
// must always point to a valid stack frame. entersyscall below is the normal
// entry point for syscalls, which obtains the SP and PC from the caller.
//
// Syscall tracing:
// At the start of a syscall we emit traceGoSysCall to capture the stack trace.
// If the syscall does not block, that is it, we do not emit any other events.
// If the syscall blocks (that is, P is retaken), retaker emits traceGoSysBlock;
// when syscall returns we emit traceGoSysExit and when the goroutine starts running
// (potentially instantly, if exitsyscallfast returns true) we emit traceGoStart.
// To ensure that traceGoSysExit is emitted strictly after traceGoSysBlock,
// we remember current value of syscalltick in m (_g_.m.syscalltick = _g_.m.p.ptr().syscalltick),
// whoever emits traceGoSysBlock increments p.syscalltick afterwards;
// and we wait for the increment before emitting traceGoSysExit.
// Note that the increment is done even if tracing is not enabled,
// because tracing can be enabled in the middle of syscall. We don't want the wait to hang.
//
//go:nosplit
func reentersyscall(pc, sp uintptr) {
_g_ := getg()
// Disable preemption because during this function g is in Gsyscall status,
// but can have inconsistent g->sched, do not let GC observe it.
_g_.m.locks++
// Entersyscall must not call any function that might split/grow the stack.
// (See details in comment above.)
// Catch calls that might, by replacing the stack guard with something that
// will trip any stack check and leaving a flag to tell newstack to die.
_g_.stackguard0 = stackPreempt
_g_.throwsplit = true
// Leave SP around for GC and traceback.
save(pc, sp)
_g_.syscallsp = sp
_g_.syscallpc = pc
casgstatus(_g_, _Grunning, _Gsyscall)
if _g_.syscallsp < _g_.stack.lo || _g_.stack.hi < _g_.syscallsp {
systemstack(func() {
print("entersyscall inconsistent ", hex(_g_.syscallsp), " [", hex(_g_.stack.lo), ",", hex(_g_.stack.hi), "]\n")
throw("entersyscall")
})
}
if trace.enabled {
systemstack(traceGoSysCall)
// systemstack itself clobbers g.sched.{pc,sp} and we might
// need them later when the G is genuinely blocked in a
// syscall.
save(pc, sp)
}
if atomic.Load(&sched.sysmonwait) != 0 {
systemstack(entersyscall_sysmon)
save(pc, sp)
}
if _g_.m.p.ptr().runSafePointFn != 0 {
// runSafePointFn may stack split if run on this stack
systemstack(runSafePointFn)
save(pc, sp)
}
_g_.m.syscalltick = _g_.m.p.ptr().syscalltick
_g_.sysblocktraced = true
pp := _g_.m.p.ptr()
pp.m = 0
_g_.m.oldp.set(pp)
_g_.m.p = 0
atomic.Store(&pp.status, _Psyscall)
if sched.gcwaiting != 0 {
systemstack(entersyscall_gcwait)
save(pc, sp)
}
_g_.m.locks--
}
// Standard syscall entry used by the go syscall library and normal cgo calls.
//
// This is exported via linkname to assembly in the syscall package.
//
//go:nosplit
//go:linkname entersyscall
func entersyscall() {
reentersyscall(getcallerpc(), getcallersp())
}
func entersyscall_sysmon() {
lock(&sched.lock)
if atomic.Load(&sched.sysmonwait) != 0 {
atomic.Store(&sched.sysmonwait, 0)
notewakeup(&sched.sysmonnote)
}
unlock(&sched.lock)
}
func entersyscall_gcwait() {
_g_ := getg()
_p_ := _g_.m.oldp.ptr()
lock(&sched.lock)
if sched.stopwait > 0 && atomic.Cas(&_p_.status, _Psyscall, _Pgcstop) {
if trace.enabled {
traceGoSysBlock(_p_)
traceProcStop(_p_)
}
_p_.syscalltick++
if sched.stopwait--; sched.stopwait == 0 {
notewakeup(&sched.stopnote)
}
}
unlock(&sched.lock)
}
// The same as entersyscall(), but with a hint that the syscall is blocking.
//go:nosplit
func entersyscallblock() {
_g_ := getg()
_g_.m.locks++ // see comment in entersyscall
_g_.throwsplit = true
_g_.stackguard0 = stackPreempt // see comment in entersyscall
_g_.m.syscalltick = _g_.m.p.ptr().syscalltick
_g_.sysblocktraced = true
_g_.m.p.ptr().syscalltick++
// Leave SP around for GC and traceback.
pc := getcallerpc()
sp := getcallersp()
save(pc, sp)
_g_.syscallsp = _g_.sched.sp
_g_.syscallpc = _g_.sched.pc
if _g_.syscallsp < _g_.stack.lo || _g_.stack.hi < _g_.syscallsp {
sp1 := sp
sp2 := _g_.sched.sp
sp3 := _g_.syscallsp
systemstack(func() {
print("entersyscallblock inconsistent ", hex(sp1), " ", hex(sp2), " ", hex(sp3), " [", hex(_g_.stack.lo), ",", hex(_g_.stack.hi), "]\n")
throw("entersyscallblock")
})
}
casgstatus(_g_, _Grunning, _Gsyscall)
if _g_.syscallsp < _g_.stack.lo || _g_.stack.hi < _g_.syscallsp {
systemstack(func() {
print("entersyscallblock inconsistent ", hex(sp), " ", hex(_g_.sched.sp), " ", hex(_g_.syscallsp), " [", hex(_g_.stack.lo), ",", hex(_g_.stack.hi), "]\n")
throw("entersyscallblock")
})
}
systemstack(entersyscallblock_handoff)
// Resave for traceback during blocked call.
save(getcallerpc(), getcallersp())
_g_.m.locks--
}
func entersyscallblock_handoff() {
if trace.enabled {
traceGoSysCall()
traceGoSysBlock(getg().m.p.ptr())
}
handoffp(releasep())
}
// The goroutine g exited its system call.
// Arrange for it to run on a cpu again.
// This is called only from the go syscall library, not
// from the low-level system calls used by the runtime.
//
// Write barriers are not allowed because our P may have been stolen.
//
// This is exported via linkname to assembly in the syscall package.
//
//go:nosplit
//go:nowritebarrierrec
//go:linkname exitsyscall
func exitsyscall() {
_g_ := getg()
_g_.m.locks++ // see comment in entersyscall
if getcallersp() > _g_.syscallsp {
throw("exitsyscall: syscall frame is no longer valid")
}
_g_.waitsince = 0
oldp := _g_.m.oldp.ptr()
_g_.m.oldp = 0
if exitsyscallfast(oldp) {
if trace.enabled {
if oldp != _g_.m.p.ptr() || _g_.m.syscalltick != _g_.m.p.ptr().syscalltick {
systemstack(traceGoStart)
}
}
// There's a cpu for us, so we can run.
_g_.m.p.ptr().syscalltick++
// We need to cas the status and scan before resuming...
casgstatus(_g_, _Gsyscall, _Grunning)
// Garbage collector isn't running (since we are),
// so okay to clear syscallsp.
_g_.syscallsp = 0
_g_.m.locks--
if _g_.preempt {
// restore the preemption request in case we've cleared it in newstack
_g_.stackguard0 = stackPreempt
} else {
// otherwise restore the real _StackGuard, we've spoiled it in entersyscall/entersyscallblock
_g_.stackguard0 = _g_.stack.lo + _StackGuard
}
_g_.throwsplit = false
if sched.disable.user && !schedEnabled(_g_) {
// Scheduling of this goroutine is disabled.
Gosched()
}
return
}
_g_.sysexitticks = 0
if trace.enabled {
// Wait till traceGoSysBlock event is emitted.
// This ensures consistency of the trace (the goroutine is started after it is blocked).
for oldp != nil && oldp.syscalltick == _g_.m.syscalltick {
osyield()
}
// We can't trace syscall exit right now because we don't have a P.
// Tracing code can invoke write barriers that cannot run without a P.
// So instead we remember the syscall exit time and emit the event
// in execute when we have a P.
_g_.sysexitticks = cputicks()
}
_g_.m.locks--
// Call the scheduler.
mcall(exitsyscall0)
// Scheduler returned, so we're allowed to run now.
// Delete the syscallsp information that we left for
// the garbage collector during the system call.
// Must wait until now because until gosched returns
// we don't know for sure that the garbage collector
// is not running.
_g_.syscallsp = 0
_g_.m.p.ptr().syscalltick++
_g_.throwsplit = false
}
//go:nosplit
func exitsyscallfast(oldp *p) bool {
_g_ := getg()
// Freezetheworld sets stopwait but does not retake P's.
if sched.stopwait == freezeStopWait {
return false
}
// Try to re-acquire the last P.
if oldp != nil && oldp.status == _Psyscall && atomic.Cas(&oldp.status, _Psyscall, _Pidle) {
// There's a cpu for us, so we can run.
wirep(oldp)
exitsyscallfast_reacquired()
return true
}
// Try to get any other idle P.
if sched.pidle != 0 {
var ok bool
systemstack(func() {
ok = exitsyscallfast_pidle()
if ok && trace.enabled {
if oldp != nil {
// Wait till traceGoSysBlock event is emitted.
// This ensures consistency of the trace (the goroutine is started after it is blocked).
for oldp.syscalltick == _g_.m.syscalltick {
osyield()
}
}
traceGoSysExit(0)
}
})
if ok {
return true
}
}
return false
}
// exitsyscallfast_reacquired is the exitsyscall path on which this G
// has successfully reacquired the P it was running on before the
// syscall.
//
//go:nosplit
func exitsyscallfast_reacquired() {
_g_ := getg()
if _g_.m.syscalltick != _g_.m.p.ptr().syscalltick {
if trace.enabled {
// The p was retaken and then entered a syscall again (since _g_.m.syscalltick has changed).
// traceGoSysBlock for this syscall was already emitted,
// but here we effectively retake the p from the new syscall running on the same p.
systemstack(func() {
// Denote blocking of the new syscall.
traceGoSysBlock(_g_.m.p.ptr())
// Denote completion of the current syscall.
traceGoSysExit(0)
})
}
_g_.m.p.ptr().syscalltick++
}
}
func exitsyscallfast_pidle() bool {
lock(&sched.lock)
_p_ := pidleget()
if _p_ != nil && atomic.Load(&sched.sysmonwait) != 0 {
atomic.Store(&sched.sysmonwait, 0)
notewakeup(&sched.sysmonnote)
}
unlock(&sched.lock)
if _p_ != nil {
acquirep(_p_)
return true
}
return false
}
// exitsyscall slow path on g0.
// Failed to acquire P, enqueue gp as runnable.
//
//go:nowritebarrierrec
func exitsyscall0(gp *g) {
_g_ := getg()
casgstatus(gp, _Gsyscall, _Grunnable)
dropg()
lock(&sched.lock)
var _p_ *p
if schedEnabled(_g_) {
_p_ = pidleget()
}
if _p_ == nil {
globrunqput(gp)
} else if atomic.Load(&sched.sysmonwait) != 0 {
atomic.Store(&sched.sysmonwait, 0)
notewakeup(&sched.sysmonnote)
}
unlock(&sched.lock)
if _p_ != nil {
acquirep(_p_)
execute(gp, false) // Never returns.
}
if _g_.m.lockedg != 0 {
// Wait until another thread schedules gp and so m again.
stoplockedm()
execute(gp, false) // Never returns.
}
stopm()
schedule() // Never returns.
}
func beforefork() {
gp := getg().m.curg
// Block signals during a fork, so that the child does not run
// a signal handler before exec if a signal is sent to the process
// group. See issue #18600.
gp.m.locks++
msigsave(gp.m)
sigblock()
// This function is called before fork in syscall package.
// Code between fork and exec must not allocate memory nor even try to grow stack.
// Here we spoil g->_StackGuard to reliably detect any attempts to grow stack.
// runtime_AfterFork will undo this in parent process, but not in child.
gp.stackguard0 = stackFork
}
// Called from syscall package before fork.
//go:linkname syscall_runtime_BeforeFork syscall.runtime_BeforeFork
//go:nosplit
func syscall_runtime_BeforeFork() {
systemstack(beforefork)
}
func afterfork() {
gp := getg().m.curg
// See the comments in beforefork.
gp.stackguard0 = gp.stack.lo + _StackGuard
msigrestore(gp.m.sigmask)
gp.m.locks--
}
// Called from syscall package after fork in parent.
//go:linkname syscall_runtime_AfterFork syscall.runtime_AfterFork
//go:nosplit
func syscall_runtime_AfterFork() {
systemstack(afterfork)
}
// inForkedChild is true while manipulating signals in the child process.
// This is used to avoid calling libc functions in case we are using vfork.
var inForkedChild bool
// Called from syscall package after fork in child.
// It resets non-sigignored signals to the default handler, and
// restores the signal mask in preparation for the exec.
//
// Because this might be called during a vfork, and therefore may be
// temporarily sharing address space with the parent process, this must
// not change any global variables or call into C code that may do so.
//
//go:linkname syscall_runtime_AfterForkInChild syscall.runtime_AfterForkInChild
//go:nosplit
//go:nowritebarrierrec
func syscall_runtime_AfterForkInChild() {
// It's OK to change the global variable inForkedChild here
// because we are going to change it back. There is no race here,
// because if we are sharing address space with the parent process,
// then the parent process can not be running concurrently.
inForkedChild = true
clearSignalHandlers()
// When we are the child we are the only thread running,
// so we know that nothing else has changed gp.m.sigmask.
msigrestore(getg().m.sigmask)
inForkedChild = false
}
// Called from syscall package before Exec.
//go:linkname syscall_runtime_BeforeExec syscall.runtime_BeforeExec
func syscall_runtime_BeforeExec() {
// Prevent thread creation during exec.
execLock.lock()
}
// Called from syscall package after Exec.
//go:linkname syscall_runtime_AfterExec syscall.runtime_AfterExec
func syscall_runtime_AfterExec() {
execLock.unlock()
}
// Allocate a new g, with a stack big enough for stacksize bytes.
func malg(stacksize int32) *g {
newg := new(g)
if stacksize >= 0 {
stacksize = round2(_StackSystem + stacksize)
systemstack(func() {
newg.stack = stackalloc(uint32(stacksize))
})
newg.stackguard0 = newg.stack.lo + _StackGuard
newg.stackguard1 = ^uintptr(0)
// Clear the bottom word of the stack. We record g
// there on gsignal stack during VDSO on ARM and ARM64.
*(*uintptr)(unsafe.Pointer(newg.stack.lo)) = 0
}
return newg
}
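malg rounds the requested size with round2 so stacks come from power-of-two size classes. A standalone sketch of that rounding (roundPow2 is a hypothetical helper, equivalent in spirit to the runtime's round2 for positive inputs):

```go
package main

import "fmt"

// roundPow2 rounds x up to the next power of two, as malg does with
// _StackSystem + stacksize before allocating the stack.
func roundPow2(x int32) int32 {
	n := int32(1)
	for n < x {
		n <<= 1
	}
	return n
}

func main() {
	fmt.Println(roundPow2(2560)) // a 2.5 KiB request lands in the 4 KiB class
}
```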
// Create a new g running fn with siz bytes of arguments.
// Put it on the queue of g's waiting to run.
// The compiler turns a go statement into a call to this.
//
// The stack layout of this call is unusual: it assumes that the
// arguments to pass to fn are on the stack sequentially immediately
// after &fn. Hence, they are logically part of newproc's argument
// frame, even though they don't appear in its signature (and can't
// because their types differ between call sites).
//
// This must be nosplit because this stack layout means there are
// untyped arguments in newproc's argument frame. Stack copies won't
// be able to adjust them and stack splits won't be able to copy them.
//
//go:nosplit
func newproc(siz int32, fn *funcval) {
argp := add(unsafe.Pointer(&fn), sys.PtrSize)
gp := getg()
pc := getcallerpc()
systemstack(func() {
newg := newproc1(fn, argp, siz, gp, pc)
_p_ := getg().m.p.ptr()
runqput(_p_, newg, true)
if mainStarted {
wakep()
}
})
}
// Create a new g in state _Grunnable, starting at fn, with narg bytes
// of arguments starting at argp. callerpc is the address of the go
// statement that created this. The caller is responsible for adding
// the new g to the scheduler.
//
// This must run on the system stack because it's the continuation of
// newproc, which cannot split the stack.
//
//go:systemstack
func newproc1(fn *funcval, argp unsafe.Pointer, narg int32, callergp *g, callerpc uintptr) *g {
_g_ := getg()
if fn == nil {
_g_.m.throwing = -1 // do not dump full stacks
throw("go of nil func value")
}
acquirem() // disable preemption because it can be holding p in a local var
siz := narg
siz = (siz + 7) &^ 7
// We could allocate a larger initial stack if necessary.
// Not worth it: this is almost always an error.
// 4*sizeof(uintreg): extra space added below
// sizeof(uintreg): caller's LR (arm) or return address (x86, in gostartcall).
if siz >= _StackMin-4*sys.RegSize-sys.RegSize {
throw("newproc: function arguments too large for new goroutine")
}
_p_ := _g_.m.p.ptr()
newg := gfget(_p_)
if newg == nil {
newg = malg(_StackMin)
casgstatus(newg, _Gidle, _Gdead)
allgadd(newg) // publishes with a g->status of Gdead so GC scanner doesn't look at uninitialized stack.
}
if newg.stack.hi == 0 {
throw("newproc1: newg missing stack")
}
if readgstatus(newg) != _Gdead {
throw("newproc1: new g is not Gdead")
}
totalSize := 4*sys.RegSize + uintptr(siz) + sys.MinFrameSize // extra space in case of reads slightly beyond frame
totalSize += -totalSize & (sys.SpAlign - 1) // align to spAlign
sp := newg.stack.hi - totalSize
spArg := sp
if usesLR {
// caller's LR
*(*uintptr)(unsafe.Pointer(sp)) = 0
prepGoExitFrame(sp)
spArg += sys.MinFrameSize
}
if narg > 0 {
memmove(unsafe.Pointer(spArg), argp, uintptr(narg))
// This is a stack-to-stack copy. If write barriers
// are enabled and the source stack is grey (the
// destination is always black), then perform a
// barrier copy. We do this *after* the memmove
// because the destination stack may have garbage on
// it.
if writeBarrier.needed && !_g_.m.curg.gcscandone {
f := findfunc(fn.fn)
stkmap := (*stackmap)(funcdata(f, _FUNCDATA_ArgsPointerMaps))
if stkmap.nbit > 0 {
// We're in the prologue, so it's always stack map index 0.
bv := stackmapdata(stkmap, 0)
bulkBarrierBitmap(spArg, spArg, uintptr(bv.n)*sys.PtrSize, 0, bv.bytedata)
}
}
}
memclrNoHeapPointers(unsafe.Pointer(&newg.sched), unsafe.Sizeof(newg.sched))
newg.sched.sp = sp
newg.stktopsp = sp
newg.sched.pc = funcPC(goexit) + sys.PCQuantum // +PCQuantum so that previous instruction is in same function
newg.sched.g = guintptr(unsafe.Pointer(newg))
gostartcallfn(&newg.sched, fn)
newg.gopc = callerpc
newg.ancestors = saveAncestors(callergp)
newg.startpc = fn.fn
if _g_.m.curg != nil {
newg.labels = _g_.m.curg.labels
}
if isSystemGoroutine(newg, false) {
atomic.Xadd(&sched.ngsys, +1)
}
casgstatus(newg, _Gdead, _Grunnable)
if _p_.goidcache == _p_.goidcacheend {
// Sched.goidgen is the last allocated id,
// this batch must be [sched.goidgen+1, sched.goidgen+GoidCacheBatch].
// At startup sched.goidgen=0, so main goroutine receives goid=1.
_p_.goidcache = atomic.Xadd64(&sched.goidgen, _GoidCacheBatch)
_p_.goidcache -= _GoidCacheBatch - 1
_p_.goidcacheend = _p_.goidcache + _GoidCacheBatch
}
newg.goid = int64(_p_.goidcache)
_p_.goidcache++
if raceenabled {
newg.racectx = racegostart(callerpc)
}
if trace.enabled {
traceGoCreate(newg, newg.startpc)
}
releasem(_g_.m)
return newg
}
// saveAncestors copies previous ancestors of the given caller g and
// includes info for the current caller into a new set of tracebacks for
// a g being created.
func saveAncestors(callergp *g) *[]ancestorInfo {
// Copy all prior info, except for the root goroutine (goid 0).
if debug.tracebackancestors <= 0 || callergp.goid == 0 {
return nil
}
var callerAncestors []ancestorInfo
if callergp.ancestors != nil {
callerAncestors = *callergp.ancestors
}
n := int32(len(callerAncestors)) + 1
if n > debug.tracebackancestors {
n = debug.tracebackancestors
}
ancestors := make([]ancestorInfo, n)
copy(ancestors[1:], callerAncestors)
var pcs [_TracebackMaxFrames]uintptr
npcs := gcallers(callergp, 0, pcs[:])
ipcs := make([]uintptr, npcs)
copy(ipcs, pcs[:])
ancestors[0] = ancestorInfo{
pcs: ipcs,
goid: callergp.goid,
gopc: callergp.gopc,
}
ancestorsp := new([]ancestorInfo)
*ancestorsp = ancestors
return ancestorsp
}
// Put on gfree list.
// If local list is too long, transfer a batch to the global list.
func gfput(_p_ *p, gp *g) {
if readgstatus(gp) != _Gdead {
throw("gfput: bad status (not Gdead)")
}
stksize := gp.stack.hi - gp.stack.lo
if stksize != _FixedStack {
// non-standard stack size - free it.
stackfree(gp.stack)
gp.stack.lo = 0
gp.stack.hi = 0
gp.stackguard0 = 0
}
_p_.gFree.push(gp)
_p_.gFree.n++
if _p_.gFree.n >= 64 {
lock(&sched.gFree.lock)
for _p_.gFree.n >= 32 {
_p_.gFree.n--
			gp = _p_.gFree.pop()
			sched.gFree.n++
			if gp.stack.lo == 0 {
				sched.gFree.noStack.push(gp)
			} else {
				sched.gFree.stack.push(gp)
			}
		}
		unlock(&sched.gFree.lock)
	}
}
WorldsEndless: i.
Marionneaux: I guess basically said. why even bother with get_template_part . what value does THAT have
Voorheis: LindsayMac: that seems a fairly common question!
Annese: It would be nice if get_template_part allowed one to pass an array
Fini: Would be nice if it just did the BASICS of what an include did! hah
Cavez: Sterndata: seriously. get_template_part is more and more confusing to me now
Oechsle: We have “people” pages currently updated by site admins or editors. We want to allow users whose ID matches the particular “people” page to be able to edit that person. In Drupal I’ve done this in user profiles, but this is the setup I have now. What WordPress functions/plugins could help me achieve something like this?
Englert: So basically users will have a second “user profile” page. ugh. The more I explain this, the stupider the setup sounds.
Fronk: WorldsEndless: there are a number of front end user profile plugins out there
Delbene: WorldsEndless: its not a stupid setup. I am actually creating a similar basic thing except admins dont create the user pages, they are created automatically when a user signs up for an account
Sharkey: LindsayMac: that sounds ideal. admin OR user created
Chiola: LindsayMac: Currently our “People” pages are public-facing, sortable, and accessed by related sites via JSON-API
Corcoran: WorldsEndless: well. those are two pretty different scenarios. What would the purpose/ benefit be of having an admin create the user page?
Twiner: Hold on. bathroom run really quick brb
Teeple: LindsayMac: These pages represent faculty in a college, so the way it’s been done previously is entirely created/updated by web maintainers; but now we want to add functionality so that users can edit their own page they will log in via CAS. Hence my current efforts.
Goosby: Im running wordpres 4.3 and i went on the admin page and changed the ip address in there, i would guess this change is saved somewhere in the wp-admin directory, but there are so many files that i cant seem to find it
Krzynowek: R-Z: What page did you change it on?
Hoxsie: R-Z: “General Settings” maybe?
Service: WordPress admin page, there is this page with 2 ip adresses
Steinbrook: So you mean URL, instead of IP?
Loudon: Well url then, im running it on a local machine so
Thaxton: R-Z: okay. IN general, almost everything you do on the site itself is on the database, not in files. Files are usually only written for things like media uploads
Woodly: Ah yeah, general settings, wordpress address url and site addressurl are now incorrect
Sadin: R-Z: So in your database you want the wp_options table, and there is a siteurl and a home field
Shanholtzer: The database values will be over-ridden, however if you have defined these somewhere in your php files like wp-config.php
Elhaj: R-Z: Normally, though, these are not in your php files, and it’s just the database values giving directions
Biancardi: Ah yeah thanks, i think i got it
Frankenstein: R-Z: What was the actual problem? Site migration?
Beresford: Just playing with my rasberry pi, trying to get my site publicly visible
Rorie: WorldsEndless: ok back sorry.
Herbison: WorldsEndless: So in realtiy, you will like have a bunch of “people” who are not actually users, in that case.
Wujcik: LindsayMac: Yeah, except for the need to edit their one particular page
Vanhee: WorldsEndless: wait huh?
Gulotta: In the past, a faculty page has been made for them by site maintainers. Now we want to add the ability for them to log in via CAS and edit it themselves.
Troidl: WorldsEndless: from what I understand, not EVERY person in the directory is going to have a user account with the website. i mean, its typically pretty impossible to get every teacher/administrator to create an account on a site and add in their info
Weingartner: WorldsEndless: i understand that. What i’m trying to discern is your ACTUAL requirements for the content. | __label__pos | 0.796258 |
I'm using Oracle 9i.
I'm fetching data from a cursor into an array:
FETCH contract_cur
BULK COLLECT INTO l_contract ;
But now i want to "convert" this *l_contract* into a CLOB variable *l_clob*
Is there an easy way to do that?
Or otherwise, how do I convert the rows from a SELECT statement into one single CLOB variable?
Thanks.
EDIT: I forgot to mention it's an array of %ROWTYPE, not just one column.
2 Answers

Accepted answer:
What an ugly thing to do.
Is it all character data, or do you have numeric and/or date/time values in there too? If so, what format do you want to use for those datatypes when you convert them to strings?
You also may need to think about field and record delimiters.
Have you considered XML?
declare
v_clob clob;
v_xml xmltype;
begin
select xmlagg(XMLELEMENT("test",xmlforest(id,val)))
into v_xml
from test;
select v_xml.getclobval
into v_clob
from dual;
dbms_output.put_line(v_clob);
end;
/
you can loop through your array and build the CLOB as you go:
SQL> DECLARE
2 TYPE tab_vc IS TABLE OF VARCHAR2(4000);
3 l_contract tab_vc;
4 l_clob CLOB;
5 BEGIN
6 dbms_lob.createtemporary (l_clob, TRUE);
7 SELECT to_char(dbms_random.STRING('a', 1000)) BULK COLLECT
8 INTO l_contract
9 FROM dual
10 CONNECT BY LEVEL <= 100;
11 FOR i IN 1..l_contract.count LOOP
12 dbms_lob.writeappend(l_clob,
13 length(l_contract(i)),
14 l_contract(i));
15 END LOOP;
16 -- your code here
17 dbms_lob.freetemporary(l_clob);
18 END;
19 /
PL/SQL procedure successfully completed
If you don't use l_contract for anything else you can build the CLOB directly from the cursor loop without the array step, it will save memory and will probably be faster:
SQL> DECLARE
2 l_clob CLOB;
3 BEGIN
4 dbms_lob.createtemporary (l_clob, TRUE);
5 FOR cc IN ( SELECT to_char(dbms_random.STRING('a', 1000)) txt
6 FROM dual
7 CONNECT BY LEVEL <= 100) LOOP
8 dbms_lob.writeappend(l_clob,
9 length(cc.txt),
10 cc.txt);
11 END LOOP;
12 -- your code here
13 dbms_lob.freetemporary(l_clob);
14 END;
15 /
PL/SQL procedure successfully completed
Thanks for your answer, but I forgot to mention I have multiple columns, like select col1, col2 ... so it's an array of %ROWTYPE. – guigui42 Feb 19 '10 at 16:37
@guigui42: you can concatenate the columns with dbms_lob.writeappend(l_clob, length(l_contract(i).col1||l_contract(i).col2...), l_contract(i).col1||l_contract(i).col2...) – Vincent Malgrat Feb 19 '10 at 16:48
LinkedHashSet Example in Java
LinkedHashSet extends HashSet and is backed by a hash table with a linked list running through it. LinkedHashSet maintains a doubly-linked list through all of its elements, which preserves the insertion order of the elements. Since LinkedHashSet extends HashSet, it has all the basic properties of a HashSet.
Important Points to Remember about LinkedHashSet
1. Order of Insertion is maintained by the LinkedHashSet
2. Even if you try to reinsert the same element, the insertion order is not affected.
3. Null value is accepted by the LinkedHashSet
Let's see a LinkedHashSet example in Java.
LinkedHashSet Example in Java
In this example we will see
1. How to Create LinkedHashSet
2. Add Elements to it.
3. Add null Value to LinkedHashSet
4. Add the same value again to the LinkedHashSet and see if it's allowed.
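The example code itself did not survive onto this page, so here is a minimal sketch covering the four points above (the class and element names are illustrative):

```java
import java.util.LinkedHashSet;

public class LinkedHashSetExample {
    public static void main(String[] args) {
        // 1. Create a LinkedHashSet
        LinkedHashSet<String> set = new LinkedHashSet<>();

        // 2. Add elements to it - insertion order is maintained
        set.add("One");
        set.add("Two");
        set.add("Three");

        // 3. A null value is accepted
        set.add(null);

        // 4. Re-adding an existing value is ignored, and the
        //    original insertion order is not disturbed
        set.add("One");

        System.out.println(set); // [One, Two, Three, null]
    }
}
```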
Output
When you print the linkedHashSet, you will see that the null value is accepted by the Set. But, similar to HashSet, the same value added again is not accepted, and neither is the order of insertion affected by inserting the same value again.
HTML <figure> and <figcaption> Tag
In this tutorial, we will learn about the HTML <figure> and <figcaption> tags with the help of examples.
HTML <figure> Tag
The HTML <figure> tag is a semantic tag that represents self-contained graphical content such as illustrations, diagrams, photos, etc. For example,
<figure>
<img src="https://cdn.programiz.com/sites/tutorial2program/files/pc_logo.svg" width = "300" height = "247">
</figure>
Browser Output
HTML Figure element with an image
HTML <figcaption> Tag
The HTML <figcaption> tag is used to define the caption for a figure. It is placed as a child element of the <figure> tag along with the content. For example,
<figure>
<img src="https://cdn.programiz.com/sites/tutorial2program/files/pc_logo.svg" width = "300" height = "247">
<figcaption>Learn to code for free.</figcaption>
</figure>
Browser Output
HTML Figure element with an image and a caption | __label__pos | 0.939032 |
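A figure can also group more than one image under a single caption. Per the HTML specification, the `<figcaption>` must be either the first or the last child of the `<figure>`. For example (the image paths here are placeholders),

```html
<figure>
  <img src="photo-one.png" alt="First photo">
  <img src="photo-two.png" alt="Second photo">
  <figcaption>Two related images sharing one caption.</figcaption>
</figure>
```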
Winrunner- TSL Script
b1izzard
Hi, I wrote the following script in WinRunner to check the boundary range:
1. VB Program contains (text box,Calculate and clear button)
Calculate button Coding:
Dim a As Integer
a = Val(Text1.Text)
If (a > 100) Then
MsgBox ("value exceeds the maximum range")
ElseIf (a < 50) Then
MsgBox ("value is less than the minimum range")
ElseIf (a >= 50 & a <= 100) Then
MsgBox ("value is within the range")
End If
2. In WinRunner, Tools -> GUI Map Editor -> Learn -> I pointed it to the Boundary Range program, learnt all the GUI properties, and saved them in an abc.GUI file
3.Tools -> Data Table ->created a new table with the parameter "Numbers" and saved as "table.xls"
Winrunner Script:
path="C:\Program Files\Mercury Interactive\WinRunner\boundary\table.xls";
ddt_open(path,DDT_MODE_READWRITE);
ddt_get_row_count(path,mycount);
#Boundary Range
win_activate("Boundary Range");
for(i=1;i<=mycount;i++)
{
set_window("Boundary Range");
edit_set("ThunderTextBox",ddt_val_by_row("C:\Program Files\Mercury Interactive\WinRunner\boundary\table.xls",i,"Numbers"));
edit_get_text("ThunderTextBox",val);
button_press("Caculate");
if(val<50)
report_msg("val is less tan minimum range");
else if (val>100)
report_msg("val is greater than maximum range");
else if(val>=50 && val <=100)
report_msg("value is within the range");
set_window("Boundary Range_0");
button_press("Clear");
}
tl_step("boundary Testing ",pass,"Successfull")
#win_close("Boundary Range");
ddt_close(path);
The script does not execute correctly; I mean it does not retrieve the values from the table.xls file and check them in the Boundary Range program. But in the results page I still get a pass, successful message.
Tutorial hero image
Lesson icon
How to Create Complex Layouts with Sencha Touch
4 min read
Originally published February 09, 2015
Sencha Touch has a powerful and adaptable layout system, but it's also quite simple to use (if not a bit confusing at first). This tutorial will focus on explaining a little bit about how the Sencha Touch layout system works and then we will walk through building a specific complex layout.
1. A Quick Introduction to Layouts in Sencha Touch
Sencha Touch uses web technologies, so one might assume that layouts would be created with HTML and CSS, but this is not the case. A layout in Sencha Touch is defined by the components that are used and the layout configuration that is assigned to it.
You may be familiar with components in Sencha Touch that can be used to nest other components inside of them, like Container and Panel. These can be assigned any of the following layout types: default, card, fit, hbox, and vbox.
Depending on the layout you assign, the components you nest inside of the container like lists, images, forms and even more containers will behave differently. Using the vbox layout for example will cause components to be added underneath each other. You could specify a vbox layout with the following code in the configuration of the container:
layout: {
type: 'vbox';
}
For a full explanation of what all the different layouts do, you should take a look at Using Layouts in Sencha Touch. Another important concept is flex. With layouts like vbox and hbox where multiple components occupy the same container, flex determines how much of the space each component should occupy. Take the following as an example:
{
xtype: 'map',
flex: 2
},
{
xtype: 'list',
flex: 1
}
Flex works as a ratio so with the above example the map would take up 2/3rds and the list would take up 1/3rd. If we were to change the flex to 5 and 2 then the map would take up 5/7ths and the list would take up 2/7ths. If we were thinking in a CSS mindset we might try to use percentage widths or heights to achieve this, but this is not what you want to do. CSS still has its place in Sencha Touch though, typically it will be used to make style changes not layout changes. If you're confused about when to use CSS and when to do things through the Sencha Touch framework, I would recommend getting your app as close to what you want by defining it through the framework and then make the rest of the changes with CSS.
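To make the ratio concrete, here is the proportional split in plain JavaScript. This is not Sencha API, just the arithmetic the hbox/vbox layouts apply when dividing the available space among children:

```javascript
// Divide a container's available pixels among children according
// to their flex values (the same proportional rule vbox/hbox use).
function splitByFlex(totalPx, flexes) {
  const sum = flexes.reduce((a, b) => a + b, 0);
  return flexes.map(f => totalPx * f / sum);
}

// A 900px-tall vbox holding a map (flex: 2) and a list (flex: 1):
console.log(splitByFlex(900, [2, 1])); // [ 600, 300 ]

// With flex 5 and 2 the split becomes 5/7ths and 2/7ths:
console.log(splitByFlex(700, [5, 2])); // [ 500, 200 ]
```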
2. Building a Home Page Menu in Sencha Touch
Commonly navigation in applications is achieved through tabs or sliding menus. We can get a little more creative than that though, sometimes you might want to do something a little outside of the norm that involves creating a more complicated layout. In this example we will be creating a layout that looks like this:
Complex Layout Example
As I mentioned above there is a few different layout options in Sencha Touch, but when building complex layouts like this you will able to create just about anything you want with a combination of vertical box and horizontal box. Vertical box places components underneath each other, horizontal box places them side by side. You can then nest these inside of each other, a horizontal box inside of a vertical box which is inside of another horizontal box for example, to achieve all sorts of layouts. Once you get your head around, it will be easy to see how to break down any layout. This is how we would achieve the above layout:
Complex Layout Example
Now let's take a look at how we might code that.
1. First set your container to a vbox layout, and add four containers as children:
Ext.define('MyApp.view.Main', {
extend: 'Ext.Container',
xtype: 'main',
config: {
layout: {
type: 'vbox',
},
items: [
{
xtype: 'container',
layout: {
type: 'fit',
},
flex: 3,
},
{
xtype: 'container',
layout: {
type: 'hbox',
},
flex: 2,
},
{
xtype: 'container',
layout: {
type: 'hbox',
},
flex: 2,
},
{
xtype: 'container',
layout: {
type: 'hbox',
},
flex: 2,
},
],
},
});
The first container will be our header / logo area, and the next three will contain our icons which we will use to allow our users to navigate to other views. We also want to split the bottom three containers into another two horizontal boxes (since we will have two side by side icons in each of the containers), so we are giving each of these a layout of hbox.
We are also making use of the flex configuration here. We have four containers, and the sum of the flex is 9 this means then that our first container with a flex of 3 will take up a third of the space (3/9) and the rest of the containers will each take up 2/9ths of the space.
2. Add two more containers to each of the hbox containers
Now we are going to split those horizontal boxes up into two separate containers. So for each of the last three containers, add the following items:
items: [
{
xtype: 'container',
cls: 'icon-container',
layout: {
type: 'vbox',
align: 'center',
},
items: [
{
xtype: 'image',
mode: 'image',
src: 'resources/images/icon.png',
},
{
html: 'Title 1',
},
],
},
{
xtype: 'container',
cls: 'icon-container',
layout: {
type: 'vbox',
align: 'center',
},
items: [
{
xtype: 'image',
mode: 'image',
src: 'resources/images/icon.png',
},
{
html: 'Title 2',
},
],
},
];
I've taken things even further here and have added another vbox layout inside the hbox layout. This will allow me to add an image, and then some HTML underneath that image. We could keep drilling down further and further but that's as far as we need to take it for this example!
3. Add some styling
You will notice in the last code block that I gave the vbox containers a class of icon-container, to finish things off we're going to add the following styles to our app.scss file (don't forget to compile with compass after the change has been made!)
.icon-container {
width: 50%;
font-size: 0.8em;
text-align: center;
padding-top: 1em;
}
And there you have it. Once you've finished you should have a layout that looks like the image above. As far as nesting containers within containers within containers, this tutorial takes it about as far as you will ever need to go. The great thing about Sencha's layout system is that this layout will now easily scale to any screen size that we need.
If you enjoyed this article, feel free to share it with others! | __label__pos | 0.976516 |
How to Implement Simple Encryption in Python
In this article, we’ll learn how to implement simple encryption in Python.
Source Code
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
from sys import stdout
a = int(input('Please input numbers:\n'))
# Get all digits in the number
aa = []
aa.append(a % 10)
aa.append(a % 100 // 10)  # floor division keeps the digits as ints in Python 3
aa.append(a % 1000 // 100)
aa.append(a // 1000)
# Step 1:
for i in range(4):
    aa[i] += 3
    aa[i] %= 8
# Step 2:
for i in range(2):
    aa[i], aa[3 - i] = aa[3 - i], aa[i]
# Step 3:
for i in range(3, -1, -1):
    stdout.write(str(aa[i]))
Output:
Please input numbers:
1234
7654
Now we can see that the original number is 1234 and the encrypted number is 7654.
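The same steps can be packaged as a reusable function, written with floor division (`//`) so every element stays an integer digit (in Python 3, `/` on integers produces floats):

```python
def encrypt4(n):
    """Encrypt a 4-digit number with the add-3-mod-8 / swap scheme above."""
    # Split into digits: ones, tens, hundreds, thousands
    digits = [n % 10, n // 10 % 10, n // 100 % 10, n // 1000]
    # Step 1: add 3 and reduce mod 8
    digits = [(d + 3) % 8 for d in digits]
    # Step 2: swap first<->fourth and second<->third
    digits[0], digits[3] = digits[3], digits[0]
    digits[1], digits[2] = digits[2], digits[1]
    # Step 3: emit the digits from index 3 down to 0
    return ''.join(str(digits[i]) for i in range(3, -1, -1))

print(encrypt4(1234))  # 7654
```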
Mathematics
Grade9
Easy
Question
How can you prove perpendicular bisector theorem?
1. By using properties of triangle
2. By midsegment theorem
3. By Point of concurrency
4. By circumcenter
The correct answer is: By using properties of triangle
We can prove the perpendicular bisector theorem by using properties of triangles.
Hence, option (1) is the correct option.
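A sketch of that triangle-based proof, for the standard statement that any point on the perpendicular bisector of a segment is equidistant from its endpoints (the labels are illustrative):

```latex
\text{Let } M \text{ be the midpoint of } \overline{AB}
\text{ and let } P \text{ lie on the perpendicular bisector of } \overline{AB}.\\
\text{In } \triangle PMA \text{ and } \triangle PMB:\quad
AM = MB,\qquad \angle PMA = \angle PMB = 90^{\circ},\qquad PM = PM.\\
\text{By SAS, } \triangle PMA \cong \triangle PMB,
\text{ hence } PA = PB.
```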
How to Fit a Curve to Power-law Distributed Data in Python
In this tutorial, you’ll learn how to generate synthetic data that follows a power-law distribution, plot its cumulative distribution function (CDF), and fit a power-law curve to this CDF using Python. This process is useful for analyzing datasets that follow power-law distributions, which are common in natural and social phenomena.
Prerequisites
Ensure you have Python installed, along with the numpy, matplotlib, and scipy libraries. If not, you can install them using pip:
pip install numpy matplotlib scipy
Step 1: Generate Power-law Distributed Data
First, we’ll generate a dataset that follows a power-law distribution using numpy.
import numpy as np
# Parameters
alpha = 3.0 # Exponent of the distribution
size = 1000 # Number of data points
# Generate power-law distributed data
data = np.random.power(a=alpha, size=size)
👉 How to Generate and Plot Random Samples from a Power-Law Distribution in Python?
The raw data is just an array of values between 0 and 1. Let's make some sense out of it and plot it in 2D space: 📈
Step 2: Plot the Cumulative Distribution Function (CDF)
Next, we’ll plot the CDF of the generated data on a log-log scale to visualize its power-law distribution.
import matplotlib.pyplot as plt
# Prepare data for the CDF plot
sorted_data = np.sort(data)
yvals = np.arange(1, len(sorted_data) + 1) / float(len(sorted_data))
# Plot the CDF
plt.plot(sorted_data, yvals, marker='.', linestyle='none', color='blue')
plt.xlabel('Value')
plt.ylabel('Cumulative Frequency')
plt.title('CDF of Power-law Distributed Data')
plt.xscale('log')
plt.yscale('log')
plt.grid(True, which="both", ls="--")
plt.show()
The plot:
Step 3: Fit a Power-law Curve to the CDF
To understand the underlying power-law distribution better, we fit a curve to the CDF using the curve_fit function from scipy.optimize.
from scipy.optimize import curve_fit
# Power-law fitting function
def power_law_fit(x, a, b):
    return a * np.power(x, b)
# Fit the power-law curve
params, covariance = curve_fit(power_law_fit, sorted_data, yvals)
# Generate fitted values
fitted_yvals = power_law_fit(sorted_data, *params)
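As a sanity check on the fit: for `np.random.power(a=alpha)` the true CDF on [0, 1] is exactly `x**alpha`, so the fitted exponent `b` should land near alpha = 3.0 and the prefactor `a` near 1.0. A self-contained, seeded version of the whole pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)          # seeded for reproducibility
data = rng.power(3.0, size=5000)        # power-law distributed on [0, 1]

sorted_data = np.sort(data)
yvals = np.arange(1, len(sorted_data) + 1) / len(sorted_data)  # empirical CDF

def power_law_fit(x, a, b):
    return a * np.power(x, b)

params, _ = curve_fit(power_law_fit, sorted_data, yvals)
a_fit, b_fit = params
print(f"a ~= {a_fit:.3f}, b ~= {b_fit:.3f}")  # close to 1.0 and 3.0 respectively
```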
Step 4: Plot the Fitted Curve with the CDF
Finally, we’ll overlay the fitted power-law curve on the original CDF plot to visually assess the fit.
# Plot the original CDF and the fitted power-law curve
plt.plot(sorted_data, yvals, marker='.', linestyle='none', color='blue', label='Original Data')
plt.plot(sorted_data, fitted_yvals, 'r-', label='Fitted Power-law Curve')
plt.xlabel('Value')
plt.ylabel('Cumulative Frequency')
plt.title('CDF with Fitted Power-law Curve')
plt.xscale('log')
plt.yscale('log')
plt.grid(True, which="both", ls="--")
plt.legend()
plt.show()
Voilà! 👇
This visualization helps in assessing the accuracy of the power-law model in describing the distribution of the data.
Recommended article:
👉 Visualizing Wealth: Plotting the Net Worth of the World’s Richest in Log/Log Space | __label__pos | 0.991628 |
Publication number: US 7009606 B2
Publication type: Grant
Application number: US 10/386,547
Publication date: 7 Mar 2006
Filing date: 13 Mar 2003
Priority date: 13 Sep 2000
Fee status: Lapsed
Also published as: US20040004615, WO2002023485A1
Inventors: Masaki Hiraga, Kensuke Habuka
Original Assignee: Monolith Co., Ltd.
Method and apparatus for generating pseudo-three-dimensional images
US 7009606 B2
Abstract
Provided is a pseudo-three-dimensional image generating technique by which a further increased large amount of CG images is generated and drawn. A pseudo-three-dimensional image generating apparatus includes a first processing unit and a second processing unit. When moving pictures that contain a three-dimensional object model are generated, the first processing unit generates key frames, selected at certain or varied intervals, of the moving pictures by CG. The second processing unit interpolates these key frames by an image processing, so as to generate intermediate frames. The timing of the key frames and intermediate frames is adjusted in a buffer memory, and they are then outputted to a display apparatus.
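As a numerical illustration of the corresponding-point relation that appears in the claims below (p′ = Q2 M2 M1^−1 Q1^−1 p), the sketch uses 4×4 homogeneous matrices. Treating Q as invertible presumes per-pixel depth is retained (as in claim 6), and all of the concrete matrices and values here are illustrative, not taken from the patent:

```python
import numpy as np

def corresponding_point(p1, M1, Q1, M2, Q2):
    """Map a point of interest in CG image 1 to its corresponding
    point in CG image 2: p' = Q2 @ M2 @ inv(M1) @ inv(Q1) @ p1."""
    return Q2 @ M2 @ np.linalg.inv(M1) @ np.linalg.inv(Q1) @ p1

# Illustrative transforms: M1 leaves the model in place, M2 translates
# it by +1 along x, and Q1 = Q2 is a trivial depth-preserving projection.
M1 = np.eye(4)
M2 = np.eye(4)
M2[0, 3] = 1.0
Q = np.eye(4)

p1 = np.array([0.0, 0.0, 5.0, 1.0])  # homogeneous image point with depth
p2 = corresponding_point(p1, M1, Q, M2, Q)
print(p2)  # [1. 0. 5. 1.] - shifted by the model's motion between frames
```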
Images(12)
Previous page
Next page
Claims(39)
1. A pseudo-three-dimensional image generating method, the method including:
assigning a point of interest to a first CG image that contains a three-dimensional object model; and
computing a corresponding point that corresponds to the point of interest in a second CG image that contains the three-dimensional object model,
wherein said computing derives the corresponding point in such a manner that image coordinates of the point of interest serve as processing starting information, and wherein said computing is such that the three-dimensional object model is divided into layers according to depth, and the corresponding point that corresponds to the point of interest is computed for each of the layers.
2. A method according to claim 1, wherein, when an operation by which the three-dimensional object model is operated inside a three-dimensional space in order to obtain an i-th CG image (i being a natural number) is denoted by Mi, an operation to obtain the i-th CG image by projecting the three-dimensional object model onto a two-dimensional plane after the operation Mi has been performed is denoted by Qi, the point of interest is denoted by p and the corresponding point is denoted by p′, said computing is such that
p′ = Q2 M2 M1^−1 Q1^−1 p
is calculated so as to derive the corresponding point.
4. A method according to claim 2, wherein said computing includes determining occlusion, and if it is determined that the corresponding point is invisible in the second CG image due to occlusion, the corresponding point is corresponded to the point of interest on the assumption that the corresponding point virtually exists on the second CG image.
5. A method according to claim 2, further including determining occlusion in the second CG image and the corresponding point that corresponds to the point of interest is computed for each of the layers if it is determined that there exists a visible region in the first CG image while there exists an invisible region in the second CG image due to occlusion.
6. A method according to claim 2, wherein the first CG image and the second CG image are respectively constituted by pixels having depth information or an aggregate thereof, and three-dimensional information is maintained by said computing of
p′ = Q2 M2 M1^−1 Q1^−1 p.
7. A method according to claim 1, wherein said computing includes determining occlusion, and if it is determined that the corresponding point is invisible in the second CG image due to occlusion, outputting data indicating that there does not exist a point on the second CG image to be corresponded to the point of interest.
8. A method according to claim 1, wherein said computing includes determining occlusion, and if it is determined that the corresponding point is invisible in the second CG image due to occlusion, the corresponding point is corresponded to the point of interest on the assumption that the corresponding point virtually exists on the second CG image.
9. A method according to claim 1, further including generating an intermediate image of the first CG image and the second CG image by interpolating image positions of the point of interest and the corresponding point.
10. A method according to claim 1, further including determining occlusion in the second CG image and the corresponding point that corresponds to the point of interest is computed for each of the layers if it is determined that there exists a visible region in the first CG image while there exists an invisible region in the second CG image due to occlusion.
11. A method according to claim 10, further including generating an intermediate image of the first CG image and the second CG image for each of the layers by interpolating image positions of the point of interest and the corresponding point, and synthesizing the intermediate image generated for each of the layers by taking overlap in a depth direction into account.
12. A method according to claim 1, further including generating an intermediate image of the first CG image and the second CG image for each of the layers by interpolating image positions of the point of interest and the corresponding point, and synthesizing the intermediate image generated for each of the layers by taking overlap in a depth direction into account.
13. A pseudo-three-dimensional image generating method, the method including:
assigning a point of interest to a three-dimensional object model;
computing a first corresponding point which corresponds to the point of interest in a first CG image that contains the three-dimensional object model;
computing a second corresponding point which corresponds to the point of interest in a second CG image that contains the three-dimensional object model; and
recording the first corresponding point and the second corresponding point in an associated manner,
wherein said computing a first corresponding point and said computing a second corresponding point are such that the three-dimensional object model is divided into layers according to depth, and the first corresponding point and the second corresponding point that correspond to the point of interest are respectively computed layer by layer.
14. A method according to claim 13, wherein, when an operation by which the three-dimensional object model is operated inside a three-dimensional space in order to obtain an i-th CG image (i being a natural number) is denoted by Mi, an operation to obtain the i-th CG image by projecting the three-dimensional object model onto a two-dimensional plane after the operation Mi has been performed is denoted by QI, the point of interest is denoted by p and the corresponding point is denoted by p′, said computing a first corresponding point is such that
p1 = Q1M1p
is calculated, and said computing a second corresponding point is such that
p2 = Q2M2p
is calculated, and said recording is such that at least a pair of data items, (p1, p2), is recorded.
15. A method according to claim 14, further including generating an intermediate image of the first CG image and the second CG image by interpolating image positions of the first corresponding point and the second corresponding point.
16. A method according to claim 14, further including judging the presence of occlusion in the first and second CG image and the first corresponding point and the second corresponding point that correspond to the point of interest are computed layer by layer if it is judged that there exists an invisible region in at least one of the first and second CG images due to occlusion.
17. A method according to claim 13, further including generating an intermediate image of the first CG image and the second CG image by interpolating image positions of the first corresponding point and the second corresponding point.
18. A method according to claim 13, further including judging the presence of occlusion in the first and second CG image and the first corresponding point and the second corresponding point that correspond to the point of interest are computed layer by layer if it is judged that there exists an invisible region in at least one of the first and second CG images due to occlusion.
19. A method according to claim 18, further including generating an intermediate image of the first CG image and the second CG image for each of the layers by interpolating image positions of the first corresponding point and the second corresponding point, and synthesizing the intermediate image generated for each of the layers by taking overlap in a depth direction into account.
20. A method according to claim 13, further including generating an intermediate image of the first CG image and the second CG image for each of the layers by interpolating image positions of the first corresponding point and the second corresponding point, and synthesizing the intermediate image generated for each of the layers by taking overlap in a depth direction into account.
21. A pseudo-three-dimensional image generating method, the method including:
assigning a point of interest to a first CG image that contains a three-dimensional object model; and
computing a corresponding point that corresponds to the point of interest in a second CG image that contains the three-dimensional object model,
wherein said computing is such that an image coordinate of the point of interest is utilized as processing starting information, the first CG image and the second CG image are respectively divided into layers according to depth, and the corresponding point is derived by taking the layer, to which the point of interest and the corresponding point belong, into account as a condition.
22. A method according to claim 21, wherein said computing is such that the corresponding point that corresponds to the point of interest is computed on condition that both the point of interest and the corresponding point belong to a same layer.
23. A method according to claim 22, wherein, when an operation by which the three-dimensional object model is operated inside a three-dimensional space in order to obtain an i-th CG image (i being a natural number) is denoted by Mi, an operation to obtain the i-th CG image by projecting the three-dimensional object model onto a two-dimensional plane after the operation Mi has been performed is denoted by Qi, the point of interest is denoted by p and the corresponding point is denoted by p′, said computing is such that
p′ = Q2M2M1^−1Q1^−1p
is calculated so as to derive the corresponding point.
24. A method according to claim 21, wherein, when an operation by which the three-dimensional object model is operated inside a three-dimensional space in order to obtain an i-th CG image (i being a natural number) is denoted by Mi, an operation to obtain the i-th CG image by projecting the three-dimensional object model onto a two-dimensional plane after the operation Mi has been performed is denoted by Qi, the point of interest is denoted by p and the corresponding point is denoted by p′, said computing is such that
p′ = Q2M2M1^−1Q1^−1p
is calculated so as to derive the corresponding point.
25. A method according to claim 21, further including generating an intermediate image of the first CG image and the second CG image by interpolating image positions of the point of interest and the corresponding point.
26. A pseudo-three-dimensional image generating method, the method including:
assigning a point of interest to an object model;
computing a first corresponding point that corresponds to the point of interest in a first CG image that contains the three-dimensional object model;
computing a second corresponding point that corresponds to the point of interest in a second CG image that contains the three-dimensional object model; and
recording the first corresponding point and the second corresponding point in an associated manner,
wherein said recording in an associated manner is such that the first CG image and the second CG image are respectively divided into layers according to depth, and the first corresponding point is associated with the second corresponding point based on a condition of a layer or layers that the first corresponding point and the second corresponding point belong to.
27. A method according to claim 26, wherein said recording is such that the first corresponding point and the second corresponding point are recorded in an associated manner on condition that both the first corresponding point and the second corresponding point belong to a same layer.
28. A method according to claim 27, wherein, when an operation by which the three-dimensional object model is operated inside a three-dimensional space in order to obtain an i-th CG image (i being a natural number) is denoted by Mi, an operation to obtain the i-th CG image by projecting the three-dimensional object model onto a two-dimensional plane after the operation Mi has been performed is denoted by Qi, the point of interest is denoted by p and the corresponding point is denoted by p′, said computing a first corresponding point is such that
p1 = Q1M1p
is calculated, said computing a second corresponding point is such that p2=Q2M2p, and said recording is such that at least a pair of data items, (p1, p2), is recorded.
29. A method according to claim 26, wherein, when an operation by which the three-dimensional object model is operated inside a three-dimensional space in order to obtain an i-th CG image (i being a natural number) is denoted by Mi, an operation to obtain the i-th CG image by projecting the three-dimensional object model onto a two-dimensional plane after the operation Mi has been performed is denoted by Qi, the point of interest is denoted by p and the corresponding point is denoted by p′, said computing a first corresponding point is such that
p1 = Q1M1p
is calculated, said computing a second corresponding point is such that p2=Q2M2p, and said recording is such that at least a pair of data items, (p1, p2), is recorded.
30. A method according to claim 26, further including generating an intermediate image of the first CG image and the second CG image by interpolating image positions of the first corresponding point and the second corresponding point.
31. A pseudo-three-dimensional image generating apparatus which renders, by CG, moving pictures that contain a three-dimensional object model, the apparatus comprising:
a first processing unit which draws key frames selected from a plurality of image frames that constitute the moving pictures, by using a direct method that copes with description of the three-dimensional object model; and
a second processing unit which generates intermediate images by interpolating the key frames, wherein said second processing unit comprises:
a point-of-interest setting unit which assigns a point of interest to a first key frame that contains the three-dimensional object model; and
a corresponding-point computing unit which computes a corresponding point that corresponds to the point of interest in a second key frame that contains the three-dimensional object model, in such a manner that image coordinates of the point of interest serve as processing starting information, and
wherein the intermediate frame is generated based on a positional relation between the point of interest and the corresponding point and wherein said corresponding-point computing unit divides the three-dimensional object model into layers according to depth, and computes the corresponding point that corresponds to the point of interest layer by layer.
32. Apparatus according to claim 31, wherein said second processing unit further includes an occlusion determining unit which judges the presence of occlusion in the second key frame, and computes the corresponding point that corresponds to the point of interest layer by layer if it is determined that there exists a visible region in the first key frame while there exists an invisible region in the second key frame due to occlusion.
33. Apparatus according to claim 32, wherein said second processing unit generates the intermediate frame, for each of the layers, based on a positional relation between the point of interest and the corresponding point, and synthesizes the intermediate frame generated for each of the layers by taking overlap in a depth direction into account.
34. Apparatus according to claim 31, wherein said second processing unit generates the intermediate frame, for each of the layers, based on a positional relation between the point of interest and the corresponding point, and synthesizes the intermediate frame generated for each of the layers by taking overlap in a depth direction into account.
35. Apparatus according to claim 31, wherein said corresponding-point computing unit computes a first corresponding point that corresponds to the point of interest in a first key frame that contains the three-dimensional object model, and computes a second corresponding point that corresponds to the point of interest in a second key frame that contains the three-dimensional object model; and said second processing unit further comprises:
a recording unit which records the first corresponding point and the second corresponding point in an associated manner,
wherein said second processing unit generates the intermediate frame based on a positional relation between the first corresponding point and the second corresponding point.
36. Apparatus according to claim 35, wherein said corresponding-point computing unit computes the first corresponding point and the second corresponding point that correspond to the point of interest layer by layer if it is determined that there exists a visible region in the first key frame while there exists an invisible region in the second key frame due to occlusion.
37. Apparatus according to claim 36, wherein said second processing unit generates the intermediate frame, for each of the layers, based on a positional relation between the first corresponding point and the second corresponding point, and synthesizes the intermediate frame generated for each of the layers by taking overlap in a depth direction into account.
38. Apparatus according to claim 35, further including an occlusion determining unit which judges the presence of occlusion in the first and second key frames, wherein said corresponding-point computing unit computes the first corresponding point and the second corresponding point that correspond to the point of interest layer by layer if it is judged that there exists an invisible region in at least one image frame of the first and second key frames due to occlusion.
39. Apparatus according to claim 38, wherein said second processing unit generates the intermediate frame, for each of the layers, based on a positional relation between the first corresponding point and the second corresponding point, and synthesizes the intermediate frame generated for each of the layers by taking overlap in a depth direction into account.
Description
This application is a continuation of international application number PCT/JP01/07807, filed Sep. 7, 2001, and now abandoned.
FIELD OF THE INVENTION
The present invention relates to pseudo-three-dimensional image generating techniques. It particularly relates to method and apparatus for generating a pseudo-three-dimensional image where a three-dimensional object model is drawn.
DESCRIPTION OF THE RELATED ART
Conventionally, CG (computer graphics) technology, together with SFX and other such techniques, has been used mainly in SF movies. In Hollywood today, for instance, CG images are used in certain scenes of a large number of movies irrespective of genre. The software of various games enjoyed at home also cannot show its worth without CG technology, and CG has penetrated deeply into the daily lives of adults as well as children. Recently, efforts are being made to build home network systems around hardware that has come into wide use at home in the form of game machines, and it is certain that such CG-based user interfaces will become a familiar feature in our daily scenes.
As the use of CG spreads, demands grow for more detailed and realistic images. In the case of the above-mentioned game machines, the drawing speed of the core CPU has already reached the order of a million polygons per second. Though they may be mere game machines, the growth of their CPU power is remarkable.
The dilemma of CG suppliers lies in the fact that once fine and detailed images are shown, the users take them for granted and gradually escalate their demands. Even the above-mentioned speed on the order of a million polygons per second is not enough for users who have become accustomed to the amazing images of CG movies. By the time a newly developed CPU is put on the market, user demand is already ahead of it. And this is repeated endlessly.
SUMMARY OF THE INVENTION
The present invention has been made in view of foregoing circumstances and, therefore, an object of the present invention is to provide a technology capable of generating and drawing a larger amount of CG images at relatively light computation load.
A preferred embodiment according to the present invention relates to a pseudo-three-dimensional image generating method. This method includes: assigning a point of interest to a first CG image that contains a three-dimensional object model; and computing a corresponding point that corresponds to the point of interest in a second CG image that contains the three-dimensional object model, wherein the computing derives the corresponding point in such a manner that image coordinates of the point of interest serve as processing starting information. The computing may be such that the corresponding point that corresponds to the point of interest is calculated by referring to the three-dimensional object model.
For example, when an operation by which the three-dimensional object model is operated inside a three-dimensional space in order to obtain an i-th CG image (i being a natural number) is denoted by Mi, an operation to obtain the i-th CG image by projecting the three-dimensional object model onto a two-dimensional plane after the operation Mi has been performed is denoted by Qi, the point of interest is denoted by p and the corresponding point is denoted by p′, the computing may be such that
p′ = Q2M2M1^−1Q1^−1p (Equation 1)
is calculated so as to derive the corresponding point. Since the operation Mi is expressed here as an absolute operation, the operation is of a fixed form for each or each set of the images. However, an operation to be added to the three-dimensional object model in the three-dimensional space, in the event of moving from the i-th CG image toward the (i+1)-th CG image, may of course be expressed in the form of a relative operation such as Mi,i+1. Since both operations are essentially the same, the absolute expression, which is simpler in notation, will be used throughout.
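As a numerical illustration only (not part of the specification), Equation 1 can be sketched as below. The sketch assumes every operation is an invertible 4×4 homogeneous matrix; in practice Qi discards depth, so inverting Q1 presumes that the depth of the point of interest is retained alongside its image coordinates. All function names are hypothetical.

```python
import numpy as np

def affine(angle_z: float, tx: float, ty: float, tz: float) -> np.ndarray:
    """Build a 4x4 operation Mi: rotation about the z axis plus translation."""
    c, s = np.cos(angle_z), np.sin(angle_z)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    m[:3, 3] = [tx, ty, tz]
    return m

def corresponding_point(p: np.ndarray, M1, Q1, M2, Q2) -> np.ndarray:
    """Derive p' = Q2 M2 M1^-1 Q1^-1 p (Equation 1)."""
    p_h = np.append(p, 1.0)  # homogeneous coordinates
    q = Q2 @ M2 @ np.linalg.inv(M1) @ np.linalg.inv(Q1) @ p_h
    return q[:3] / q[3]

# Toy data: identity projection, pure translations in x.
M1 = affine(0.0, 1.0, 0.0, 0.0)   # frame-1 operation: shift +1 in x
M2 = affine(0.0, 3.0, 0.0, 0.0)   # frame-2 operation: shift +3 in x
Q = np.eye(4)                     # projection modelled as the identity here
p = np.array([1.0, 2.0, 5.0])     # point of interest as seen in frame 1
print(corresponding_point(p, M1, Q, M2, Q))   # -> [3. 2. 5.]
```

With identity projections the chain reduces to undoing the frame-1 shift and applying the frame-2 shift, which is why the x coordinate moves from 1 to 3.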
This method may further include generating an intermediate image of the first CG image and the second CG image by interpolating image positions of the point of interest and the corresponding point.
Normally, when three-dimensional CG images are to be generated, an arbitrary three-dimensional coordinate transformation in the three-dimensional space, that is, the above-described Mi, is first performed on a three-dimensional object model (hereinafter also referred to as “object”), and thereafter a view volume defined by a predetermined viewpoint is projected onto the plane, that is, the above-described Qi is performed, so as to obtain images. An example of the Mi is an affine transformation including rotation, translation, dilation and contraction. If the object is expressed by polygons,
Qi Mi s
is computed for each vertex s of the polygons. In the case of moving pictures, a similar processing is repeated for each of the image frames, so that the computation load to draw each frame becomes heavy if the number of polygons is large. Thus, the degree of attainable fineness of an image that can be drawn in real time is limited.
According to the present invention, on the other hand, when the number of frames to be drawn per second is, for example, 30 and those image frames are denoted by Ii (i=1, 2, . . . , 30), I1 and I30 are rendered by the conventional method, and these I1 and I30 are regarded as the first CG image and the second CG image, respectively. Then a corresponding point in I30 of a point of interest set in I1 is computed, and matching between the two images is computed by repeating this processing for a plurality of points of interest, so that the intermediate frames I2 through I29 can be generated virtually by interpolating the positional relations between those points of interest and the corresponding points. In this case, therefore, conventional rendering needs to be performed on only two frames out of 30. Thus, even taking into consideration the computation of Equation 1 necessary for computing the corresponding points, the total computation load can be reduced significantly. Hereinafter, the image frames which are rendered by a method similar to the conventional one, or more generally, the image frames on which the interpolation will be based, are called “key frames.”
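The 30-frame example above can be sketched as follows. This is a hypothetical illustration in which each matched point pair is interpolated linearly between the two key frames; a real implementation could of course use other interpolation schemes.

```python
import numpy as np

def interpolate_frames(points_i1: np.ndarray, points_i30: np.ndarray,
                       n_frames: int = 30) -> list:
    """Linearly interpolate matched point positions across n_frames frames."""
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)            # t = 0 at key frame I1, t = 1 at I30
        frames.append((1.0 - t) * points_i1 + t * points_i30)
    return frames

points_i1 = np.array([[10.0, 20.0], [30.0, 40.0]])   # points of interest in I1
points_i30 = np.array([[20.0, 20.0], [50.0, 40.0]])  # corresponding points in I30
frames = interpolate_frames(points_i1, points_i30)
# frames[0] reproduces I1's points, frames[29] reproduces I30's;
# frames[1] .. frames[28] give the virtual intermediate frames I2 .. I29.
```

Only the two key frames are rendered directly; everything in between is positional interpolation, which is where the computation saving comes from.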
There may exist a single invisible region or plural invisible regions due to occlusion in a CG image where a plurality of objects are drawn. The computing of a corresponding point may include determining occlusion. If it is determined that the corresponding point is invisible in the second CG image due to occlusion, data indicating that no point corresponding to the point of interest exists on the second CG image may be output, or the corresponding point may be associated with the point of interest on the assumption that the corresponding point virtually exists on the second CG image.
What is meant by “virtually exists” includes a case when an image corresponding to an invisible region is virtually generated in a manner such that an image of an occlusion region is generated from the original three-dimensional object model and is then pasted on the second CG image, or such that an image corresponding to the occlusion region is cut out and pasted on the second CG image.
In order to cope with the problem of occlusion, a CG image may be given a layered structure in a depth direction. A CG image having this layered structure may be one in which a plurality of images, whose depths differ when the object is viewed from a single viewpoint, are multiplexed. The method according to this embodiment may further include determining occlusion in the second CG image, wherein the computing of a corresponding point may be such that the three-dimensional object model is divided into layers according to depth, and the corresponding point that corresponds to the point of interest is computed for each of the layers if it is determined that there exists a visible region in the first CG image while there exists an invisible region in the second CG image due to occlusion.
This method may further include generating an intermediate image of the first CG image and the second CG image for each of the layers by interpolating image positions of the point of interest and the corresponding point, and synthesizing the intermediate image generated for each of the layers by taking overlap in a depth direction into account.
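The layer-wise synthesis described above can be sketched with a toy back-to-front compositing routine. The one-dimensional "pixel" representation and all names here are purely illustrative, not from the specification.

```python
def composite(layers):
    """Composite per-layer intermediate images back to front.

    layers: list of (depth, pixels) pairs, smaller depth meaning nearer;
    pixels is a 1D list where None marks positions the layer does not cover.
    """
    width = len(layers[0][1])
    out = [None] * width
    for depth, pixels in sorted(layers, key=lambda l: -l[0]):  # farthest first
        for x, value in enumerate(pixels):
            if value is not None:
                out[x] = value          # nearer layers overwrite farther ones
    return out

far = (10.0, ['B', 'B', 'B', 'B'])     # background layer's intermediate image
near = (2.0, [None, 'F', 'F', None])   # foreground layer occludes the middle
print(composite([near, far]))          # -> ['B', 'F', 'F', 'B']
```

Painting layers from farthest to nearest is one simple way of "taking overlap in a depth direction into account": wherever the foreground layer is present it hides the background, and elsewhere the background shows through.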
Another preferred embodiment according to the present invention relates also to a pseudo-three-dimensional image generating method. This method includes: assigning a point of interest to a three-dimensional object model; computing a first corresponding point which corresponds to the point of interest in a first CG image that contains the three-dimensional object model; computing a second corresponding point which corresponds to the point of interest in a second CG image that contains the three-dimensional object model; and recording the first corresponding point and the second corresponding point in an associated manner.
When an operation by which the three-dimensional object model is operated inside a three-dimensional space in order to obtain an i-th CG image (i being a natural number) is denoted by Mi, an operation to obtain the i-th CG image by projecting the three-dimensional object model onto a two-dimensional plane after the operation Mi has been performed is denoted by Qi, the point of interest is denoted by p and the corresponding point is denoted by p′, the computing a first corresponding point may be such that p1=Q1M1p is calculated, and the computing a second corresponding point may be such that p2=Q2M2p is calculated, and the recording may be such that at least a pair of data items, (p1, p2), is recorded.
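A minimal sketch of this pair-recording variant follows, assuming 4×4 homogeneous matrices for Mi and Qi; the helper names are illustrative, not from the specification. The point of interest is set on the model itself and projected into both key frames.

```python
import numpy as np

def project(Q: np.ndarray, M: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Compute Qi Mi p for a 3D model point p, returning 2D image coordinates."""
    p_h = np.append(p, 1.0)           # homogeneous coordinates
    q = Q @ M @ p_h
    return q[:2] / q[3]

def record_pair(p, M1, Q1, M2, Q2):
    """Return the associated pair (p1, p2) = (Q1 M1 p, Q2 M2 p)."""
    return project(Q1, M1, p), project(Q2, M2, p)

M1 = np.eye(4)
M2 = np.eye(4); M2[0, 3] = 2.0    # frame-2 operation shifts the model +2 in x
Q = np.eye(4)                     # projection modelled as the identity here
p = np.array([1.0, 1.0, 4.0])     # point of interest on the object model
p1, p2 = record_pair(p, M1, Q, M2, Q)
print(p1, p2)                     # -> [1. 1.] [3. 1.]
```

Unlike the Equation 1 variant, no inversion is needed here: both image positions are computed forward from the same model point and simply stored as an associated pair.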
Utilizing, for example, the pair of the data item, the method may further include generating an intermediate image of the first CG image and the second CG image by interpolating image positions of the first corresponding point and the second corresponding point. In this embodiment, too, the key frames are suppressed to part of the whole frames, so that computation load can be markedly reduced.
A method according to this embodiment may further include judging the presence of occlusion in the first and second CG images, wherein the computing a corresponding point may be such that the three-dimensional object model is divided into layers according to depth, and the first corresponding point and the second corresponding point that correspond to the point of interest are computed layer by layer if it is judged that there exists an invisible region in at least one of the first and second CG images due to occlusion.
This method may further include generating an intermediate image of the first CG image and the second CG image for each of the layers by interpolating image positions of the first corresponding point and the second corresponding point, and synthesizing the intermediate image generated for each of the layers by taking overlap in a depth direction into account.
Still another preferred embodiment according to the present invention relates also to a pseudo-three-dimensional image generating method. This method includes: assigning a point of interest to a first CG image that contains a three-dimensional object model; and computing a corresponding point that corresponds to the point of interest in a second CG image that contains the three-dimensional object model. The computing may be such that an image coordinate of the point of interest is utilized as processing starting information, the first CG image and the second CG image are respectively divided into layers according to depth, and the corresponding point is derived by taking the layer, to which the point of interest and the corresponding point belong, into account as a condition. The computing may be such that the corresponding point that corresponds to the point of interest is computed on condition that both the point of interest and the corresponding point belong to a same layer.
Still another preferred embodiment according to the present invention relates also to a pseudo-three-dimensional image generating method. This method includes: assigning a point of interest to an object model; computing a first corresponding point that corresponds to the point of interest in a first CG image that contains the three-dimensional object model; computing a second corresponding point that corresponds to the point of interest in a second CG image that contains the three-dimensional object model; and recording the first corresponding point and the second corresponding point in an associated manner. The recording in an associated manner may be such that the first CG image and the second CG image are respectively divided into layers according to depth, and the first corresponding point is associated with the second corresponding point based on a condition of a layer or layers that the first corresponding point and the second corresponding point belong to. The recording may be such that the first corresponding point and the second corresponding point are recorded in an associated manner on condition that both the first corresponding point and the second corresponding point belong to a same layer.
Still another preferred embodiment according to the present invention relates also to a pseudo-three-dimensional image generating method. This method includes: acquiring a first image and a second image which have depth information; and specifying a conversion rule of an image coordinate of an object contained in the first image and the second image. The specifying is such that, upon acquiring information on how the object is operated in a three-dimensional space, for example, information on the above-described Mi, the conversion rule is obtained by taking that information into account.
The specifying may be such that the conversion rule is obtained, for example, in the form of QiMi, by combining information in the three dimensional space, such as Mi, and information on an operation, such as Qi, by which the object is projected from the three dimensional space onto image planes of the first image and the second image.
This embodiment may further include: computing a corresponding point in the second image that corresponds to a point of interest provided in the first image, based on the conversion rule; and generating an intermediate image of the first image and the second image by interpolating image positions of the point of interest and the corresponding point.
Here, the first and second images include photographed images obtained by a stereo camera or other devices that can acquire depth information. Since these need not be CG images, the concept of rendering is not required. Here too, however, the first and second images are the key frames utilized to generate intermediate images. In this embodiment, too, the corresponding points are obtained through manipulation of the object in the three-dimensional space, so they can generally be determined with high accuracy. As a result, relatively reliable and accurate intermediate images can be obtained from a small number of key frames, thus realizing desirable data compression.
Still another preferred embodiment according to the present invention relates also to a pseudo-three-dimensional image generating method. This method renders, by CG, moving pictures that contain a three-dimensional object model, and executes a first rendering and a second rendering in a combined manner. The first rendering draws key frames selected from a plurality of image frames that constitute the moving pictures by using a direct method that copes with description of the three-dimensional object model, the second rendering draws intermediate frames by interpolating the key frames, and the moving pictures are expressed by the key frames and the intermediate frames.
For example, if the object is described by polygons, the first rendering executes a general rendering that draws the polygons. Though there are various other expressions, such as a function or a volume, besides polygons, the first rendering renders the key frames by a method applied naturally to those expressions. On the other hand, the second rendering generates or draws the intermediate frames by performing interpolation among the key frames.
Still another preferred embodiment according to the present invention relates to a pseudo-three-dimensional image generating apparatus. This apparatus, which renders, by CG, moving pictures that contain a three-dimensional object model, includes: a first processing unit which draws key frames selected from a plurality of image frames that constitute the moving pictures, by using a direct method that copes with description of the three-dimensional object model; and a second processing unit which generates intermediate images by interpolating the key frames.
The second processing unit may include: a point-of-interest setting unit which assigns a point of interest to a first key frame that contains the three-dimensional object model; and a corresponding-point computing unit which computes a corresponding point that corresponds to the point of interest in a second key frame that contains the three-dimensional object model, in such a manner that image coordinates of the point of interest serve as processing starting information, wherein the intermediate frame may be generated based on a positional relation between the point of interest and the corresponding point. The corresponding-point computing unit may divide the three-dimensional object model into layers according to depth, and may compute the corresponding point that corresponds to the point of interest layer by layer. The corresponding-point computing unit may respectively divide the first and second key frames into layers according to depth, and may compute the corresponding point based on a condition of a layer or layers that the first corresponding point and the second corresponding point belong to.
The second processing unit may further include an occlusion determining unit which judges the presence of occlusion in the second key frame, and the corresponding-point computing unit may divide the three-dimensional object model into layers according to depth and may compute the corresponding point that corresponds to the point of interest layer by layer if it is determined that there exists a visible region in the first key frame while there exists an invisible region in the second key frame due to occlusion. Moreover, during the process in which the corresponding-point computing unit computes the corresponding point, the occlusion determining unit judges whether the corresponding point lies in an occlusion region or not. If the occlusion determining unit judges that the corresponding point exists in an occlusion region, the corresponding-point computing unit may set layer information to the correspondence relation between the point of interest and the corresponding point. The second processing unit may generate the intermediate frame, for each of the layers, based on a positional relation between the point of interest and the corresponding point, and may synthesize the intermediate frame generated for each of the layers by taking overlap in a depth direction into account.
As another structure, the second processing unit may include: a point-of-interest setting unit which assigns a point of interest to the three-dimensional object model; a corresponding-point computing unit which computes a first corresponding point that corresponds to the point of interest in a first key frame that contains the three-dimensional object model, and computes a second corresponding point that corresponds to the point of interest in a second key frame that contains the three-dimensional object model; and a recording unit which records the first corresponding point and the second corresponding point in an associated manner, wherein the second processing unit may generate the intermediate frame based on a positional relation between the first corresponding point and the second corresponding point.
Still another preferred embodiment according to the present invention relates to a pseudo-three-dimensional image generating method. This method is a method by which to generate moving pictures that contain a three-dimensional object model, and includes: selecting predetermined frames as key frames among image frames constituting the moving pictures; turning the key frames into a CG image by utilizing three-dimensional information on the three-dimensional object model; and generating an intermediate frame which should exist between the key frames by computing image matching among the key frames.
It is to be noted that any arbitrary combination of the above-described elements and their expressions changed between a method, an apparatus, a recording medium, a computer program and so forth are also encompassed by the scope of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The above-described objects and the other objects, features and advantages will become more apparent from the following preferred embodiments taken with the accompanying drawings in which:
FIG. 1 shows a structure of a pseudo-three-dimensional image generating apparatus according to a first embodiment.
FIG. 2 is a schematic illustration showing operations for drawing an object in a first key frame and a second key frame.
FIG. 3 is a flowchart showing a procedure for generating a pseudo-three-dimensional image according to the first embodiment.
FIG. 4 shows conceptually a processing for generating an intermediate frame by interpolation.
FIG. 5 is a flowchart showing a procedure, according to the first embodiment, for generating a pseudo-three-dimensional image from a processing starting point different from that shown in FIG. 3.
FIG. 6 shows a structure of a pseudo-three-dimensional image generating apparatus according to a second embodiment of the present invention.
FIG. 7 is a schematic illustration showing operations for drawing two objects on a first key frame and a second key frame.
FIG. 8 is a flowchart showing a procedure for generating a pseudo-three-dimensional image according to the second embodiment.
FIG. 9 shows schematic illustrations of a procedure for generating an intermediate frame by the use of a layered structure.
FIG. 10 is a flowchart showing a procedure for generating a pseudo-three-dimensional image from a processing starting point different from that shown in FIG. 8.
FIG. 11 is a flowchart showing an example of a modified procedure for generating a pseudo-three-dimensional image.
DETAILED DESCRIPTION OF THE INVENTION
One of the features of an embodiment is a fusion of CG technology and natural picture processing technology. According to a general technique of CG, a three-dimensional object model (hereinafter referred to as “object”) can be turned into a CG image if a three-dimensional form of the object and the posture thereof in a three-dimensional space are specified and an image plane onto which it is to be projected is determined. With moving pictures, too, the motion of an object can be expressed completely by performing the above-described process on each of the frames. However, the computation load of expressing a multiplicity of polygons in real time is heavy.
On the other hand, as an image processing mainly for natural pictures, a technique has been conventionally known in which matching between two images is computed and an intermediate image is obtained by interpolating the corresponding points. Although image matching is generally applied to the structuring of three-dimensional data from an image or the image recognition, it can also be utilized effectively in the compression of moving pictures by generating intermediate images through interpolation. Nevertheless, matching can involve incorrect correspondence, and the intermediate images produced often have problems with the image quality.
The present embodiment fuses together the advantages of CG and the advantages of image matching and lightens the shortcomings thereof. In CG, the position of an object in each frame is identified perfectly, so that there is normally no room for such a “hazy” concept as interpolation to enter. Image matching, on the other hand, is applicable only to natural pictures or photographed images and is basically incompatible with the artificial concept of CG.
Despite these preconditions, the present embodiment is such that key frames only are turned into images using the three-dimensional information on an object, and intermediate frames are generated by image matching. Utilized is the understanding of the present inventors that the information on the coordinates within the image of an object drawn in a key frame rendered by CG is perfectly defined in a process of CG processing and therefore a corresponding point in a second key frame that corresponds to a point of interest set in a first key frame can be specified perfectly. That is, as matching between key frames can be obtained without error, the image quality of intermediate frames obtained by interpolation becomes very high. At the same time, computation load becomes light because the rendering of intermediate frames can be accomplished by interpolation computation only.
FIG. 1 shows a structure of a pseudo-three-dimensional image generating apparatus 10 according to a first embodiment. In terms of hardware, this structure can be realized by a CPU, a memory and other LSIs of an arbitrary computer. In terms of software, it is realized by memory-loaded programs or the like having a function of generating pseudo-three-dimensional images, but drawn and described here are functional blocks that are realized in cooperation with those. Thus, it is understood by those skilled in the art that these functional blocks can be realized in a variety of forms by hardware only, software only or a combination thereof.
The pseudo-three-dimensional image generating apparatus 10 includes a first processing unit 12, a second processing unit 14, and a buffer memory 30 that stores those outputs temporarily and performs a synchronization processing on them. The first processing unit 12 includes a key frame setting unit 16 that sets key frames when drawing an object as moving pictures, and a key frame drawing unit 18 that actually renders the set key frames by a CG processing. Now, suppose that two key frames, namely, first and second key frames, are to be drawn.
The second processing unit 14 includes a point-of-interest setting unit 20 that sets a point of interest relative to the first key frame, a corresponding-point computing unit 22 that computes a corresponding point in the second key frame that corresponds to the point of interest, a recording unit 24 that stores the point of interest and the corresponding point in a manner that associates them, an intermediate frame generator 26 that generates intermediate frames by computing an interpolation based on the associated information, and an intermediate frame drawing unit 28 that renders the generated intermediate frames. The rendered key frames and intermediate frames are temporarily stored in the buffer memory 30, aligned into the correct sequence of frames and outputted to a display apparatus (not shown).
FIG. 2 shows an outline of coordinate computation for drawing an object OBJ, which is described by three-dimensional coordinate data, in each of a first key frame KF1 and a second key frame KF2. Here, the following notation is used:
• Mi: Operation from three dimensions to two dimensions, wherein an object OBJ is manipulated within a three-dimensional space, in order to obtain an i-th key frame.
• Qi: Operation from three dimensions to three dimensions to obtain the i-th key frame by projecting the object OBJ onto a two-dimensional plane, after the operation Mi.
• p1: A point of interest set in the first key frame KF1.
• p2: A corresponding point that corresponds to p1 obtained in the second key frame KF2.
• p: A point on the original object OBJ corresponding to p1 and p2.
If Mi is a manipulation in a world coordinate system, then Qi may be understood as a conversion of it into the coordinate system within a camera image plane. The former may also be expressed as a model view matrix, and the latter as a projection matrix. Although the latter has been stated to be a conversion from three dimensions to two dimensions, the CG processing is generally such that in addition to two-dimensional coordinates (x, y) on the screen, there remain the depths d of their respective points as attributes. Namely, data on each of the points on the screen are stored in the form of (xa, ya, da).
According to this notation, the equation by which to derive p2 from p1 is:
p2 = f(p1) = Q2 M2 M1^-1 Q1^-1 p1 (Equation 2)
Here, an operation f can be said to specify the conversion rule for the coordinates within an image of the object OBJ between the first key frame KF1 and the second key frame KF2.
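The composition in Equation 2 can be illustrated with a minimal sketch (not part of the patented apparatus). The transforms below are assumptions chosen for simplicity: M1 and M2 are pure translations, so their inverses are just negated translations, and Q1, Q2 are taken as identity projections that keep the depth d as an attribute, as the text describes.

```python
# Sketch of Equation 2: p2 = f(p1) = Q2 M2 M1^-1 Q1^-1 p1.
# M1, M2 are hypothetical model-view translations; Q1, Q2 are identity here.

def translate(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def matvec(m, v):
    """Apply a 4x4 matrix to a homogeneous point (x, y, d, 1)."""
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(4))

T1 = (1, 0, 0)   # assumed model-view M1 for key frame KF1
T2 = (0, 2, 0)   # assumed model-view M2 for key frame KF2

def f(p1):
    """Conversion rule f: map a point of interest in KF1 to KF2."""
    # Undo Q1 (identity) and M1 to return to object space ...
    p = matvec(translate(-T1[0], -T1[1], -T1[2]), p1)
    # ... then apply M2 and Q2 (identity) to land in KF2.
    return matvec(translate(*T2), p)

p1 = (3, 4, 5, 1)   # point of interest in KF1 as (x, y, depth d, 1)
p2 = f(p1)          # corresponding point in KF2
```

Because every step of the chain is an exactly known transform, the corresponding point is specified without matching error, which is the property the interpolation in the later steps relies on.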
FIG. 3 is a flowchart showing a processing done by the pseudo-three-dimensional image generating apparatus 10. The concept underlying here is that the starting point of processing is the first key frame KF1 and a point of interest is set on the image. The case where the point of interest is placed on an object OBJ will be described later with reference to FIG. 5.
Prior to the start of the processing in FIG. 3, key frames are set for the moving pictures to be generated by the key frame setting unit 16. It is supposed here that 30 image frames are to be generated, and the key frame setting unit 16, for instance, sets the first frame thereof as a first key frame KF1 and the last frame thereof as a second key frame KF2. Key frames can be selected in various ways, automatically or manually, that is, the key frames can be placed or selected at every predetermined number of frames or whenever a scene change takes place, or frames specified by a user may be selected as key frames.
In a similar manner, it is supposed that prior to the processing of FIG. 3, the selected key frames have already been rendered by the key frame drawing unit 18, using conventional CG processing, and the resulting drawing data are stored in the buffer memory 30.
Under these assumptions, as shown in FIG. 3, the point-of-interest setting unit 20 first sets a point of interest on the first key frame KF1 (S10). The point of interest may be set at the vertex or the like of a polygon, may be set one for each polygon, or may be set at fixed intervals in the x and y directions on the first key frame KF1. The greater the number of points of interest, the finer the result will be, but the number thereof should be chosen in consideration of computation load.
Then a corresponding point on the second key frame KF2 corresponding to the above-mentioned point of interest is computed by the corresponding-point computing unit 22 (S12). If one point of interest is p1 as in FIG. 2, the corresponding point p2 thereof is derived using the above-described equation 2.
Corresponding points are computed for all the points of interest initially set (Y of S14). Upon completion thereof (N of S14), the correspondence relationship, namely, all the pairs of (p1, p2), is recorded as coordinate data in the recording unit 24 (S16). Moreover, at this time, arbitrary attribute data other than coordinates, which these points have, may be converted into numerical values and stored. The attribute data converted into numerical values can be subjected to an interpolation processing later. Colors are taken as an example here. That is, color data c1 for p1 and color data c2 for p2 are stored.
Then the intermediate frame generator 26 generates intermediate frames by performing an interpolation computation based on the stored correspondence relations (S18), and an intermediate frame drawing unit 28 renders the generated intermediate frames and outputs the resulting data to the buffer memory 30.
FIG. 4 shows an interpolation processing. Here, a first key frame KF1 and a second key frame KF2, which are superposed on each other, are shown at the top of the figure, and the positional relationship between the point of interest p1 and the corresponding point p2 is shown clearly. Although an arbitrary nonlinear interpolation may be used for interpolation, a linear interpolation or a linear homotopy is used here for the sake of simplicity. The point of interest p1 and the corresponding point p2 are connected by a line segment, which is interior-divided at a ratio of u:(1−u). If the display times for the first key frame KF1 and the second key frame KF2 are denoted by t1 and t2, respectively, then the intermediate frame IM at display time tu = u t2 + (1−u) t1 can be obtained by the above-mentioned interior division. Strictly speaking, the position of a corresponding point on the intermediate frame IM corresponding to the point p on the object OBJ is obtained. The intermediate frame IM is completed by performing this interior division processing for all the pairs of points of interest and corresponding points.
Since an intermediate frame at an arbitrary time between the first key frame KF1 and the second key frame KF2 can be generated by repeating the above-described processing while moving the above-mentioned parameter u in the range of [0, 1], a finite number of intermediate frames are generally obtained by moving u stepwise in [0, 1]. At this time, the color is also interpolated to obtain smooth moving pictures, using
cu = u c2 + (1−u) c1 (Equation 3)
Equation 3 is also valid for attributes other than position and color. It is to be noted that if u is allowed to take values outside [0, 1], exterior division, instead of interior division, may be realized. That is, frames before the first key frame KF1 or frames after the second key frame KF2 can be generated. Such frames are also called intermediate frames.
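The interior division of positions and of colors per Equation 3, stepped over u, can be sketched as follows. The concrete point and color values are hypothetical examples, not data from the specification:

```python
def interior_divide(a, b, u):
    """Interior division at ratio u:(1-u): (1-u)*a + u*b, per component.
    Valid for any numeric attribute tuple (position, color, etc.)."""
    return tuple((1 - u) * x + u * y for x, y in zip(a, b))

# Point of interest p1 (color c1) in KF1; corresponding point p2 (color c2) in KF2.
p1, c1 = (10.0, 20.0, 5.0), (255, 0, 0)   # (x, y, depth d), RGB
p2, c2 = (30.0, 40.0, 9.0), (0, 0, 255)

# Stepping u through [0, 1] yields a finite set of intermediate frames;
# u outside [0, 1] would realize exterior division (extrapolated frames).
frames = [(interior_divide(p1, p2, u), interior_divide(c1, c2, u))
          for u in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

At u = 0 the result coincides with the first key frame data and at u = 1 with the second, so the interpolated sequence joins the two key frames smoothly.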
In the buffer memory 30, the rendering data of the key frames and the rendering data of the intermediate frames are stored at the point of completion of S18. These are rearranged in the order of frames, synchronized at predetermined output timing, and outputted to the display apparatus (S20). This completes the series of processing.
FIG. 5 is a flowchart showing another type of processing done by the pseudo-three-dimensional image generating apparatus 10. Although, in FIG. 3, the first key frame KF1 is the starting point of processing, the object OBJ is the starting point of processing here. The processing starts with the setting of a point-of-interest p on the object OBJ by the point-of-interest setting unit 20 (S30). The processings by the key frame setting unit 16 and the key frame drawing unit 18 are the same as those in FIG. 3. Then the corresponding points are computed for the first key frame KF1 and the second key frame KF2, respectively (S32, S34). The former is called a first corresponding point p1, and the latter a second corresponding point p2. They can be derived by
p1 = Q1 M1 p
p2 = Q2 M2 p
Hereafter, the first and second corresponding points are computed until all the points of interest set on the object OBJ have been processed (S14). Once the correspondence relations are obtained, the subsequent steps of processing S16 to S20 are the same as in FIG. 3.
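The forward projections p1 = Q1 M1 p and p2 = Q2 M2 p can be sketched as follows. The transforms are stand-in functions assumed for illustration (translations for M1, M2 and an orthographic toy projection keeping depth for Q1 = Q2):

```python
def make_projector(model_view, projection):
    """Compose a model-view step with a projection step: p -> Q(M(p))."""
    return lambda p: projection(model_view(p))

# Hypothetical per-frame transforms on (x, y, d) points.
M1 = lambda p: (p[0] + 1.0, p[1], p[2])       # model-view for KF1
M2 = lambda p: (p[0], p[1] + 2.0, p[2])       # model-view for KF2
Q = lambda p: p                               # toy projection retaining depth d

to_kf1 = make_projector(M1, Q)                # p1 = Q1 M1 p
to_kf2 = make_projector(M2, Q)                # p2 = Q2 M2 p

# Project each point of interest on the object into both key frames and
# record the pair (p1, p2) as the correspondence relation.
correspondences = []
for p in [(0.0, 0.0, 4.0), (1.0, 1.0, 4.0)]:
    correspondences.append((to_kf1(p), to_kf2(p)))
```

Each recorded pair then feeds the same interpolation steps S16 to S20 as in the key-frame-centered procedure.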
In the first embodiment described above, the key frames in consideration have been CG images. However, the present invention is applicable to other types of images as well. A good example thereof is images in general that have depth information. The reason is that the equations for operation used in the processings described thus far can generally be applied to image data which have not only the two-dimensional coordinates of x and y but also depth information d. Hence, similar intermediate images can be generated for images obtained by stereo cameras or infrared or other range sensors, and for ordinary two-dimensional images to which depth information is artificially added. The depth information may be held not only in pixel units but also in units of area formed by a certain set of pixels. Moreover, while the generation of moving pictures has been described in the above embodiment, still pictures seen from viewpoints different from those of the key frames can also be obtained.
Concerning the interpretation of the embodiments, a number of notes will be added hereinbelow. The parts stating that key frames are obtained by rendering by a conventional method have two meanings. That is, one of them means that key frames are rendered by a conventional method at the time of drawing thereof, and the other means that images having been rendered by a conventional method are stored as key frames and then the images thereof are displayed. The present patent specification employs statements like “rendering key frames by the conventional method” for both cases.
The interior division at a ratio of u:(1−u) is used in generating an intermediate frame. In doing so, however, it is not necessary that the interior division is done in a linear manner or that the interpolation is such that the sum is always 1 (one).
Next, a second embodiment according to the present invention will be described. The second embodiment differs from the first embodiment in that an intermediate frame is generated by obtaining the correspondence relationship between key frames while taking into account the problem of occlusion, in which an object hidden behind the other object is not visible. The description of the structure and operation that are in common with the first embodiment will be omitted, and the structure and operation that differ from the first embodiment will be explained hereinafter.
Generally when there is an occlusion, the generation of an intermediate image by forcibly deriving a correspondence relationship between key frames will result in a forcible correspondence to a false corresponding point despite the fact that there actually is no corresponding point in the region of occlusion, thus producing an unnatural intermediate image. Therefore, in this second embodiment, the image of an object is given a layer structure in a depth direction and the correspondence relationship is computed layer by layer. In generating intermediate frames, an intermediate frame is generated for each layer and a final intermediate frame is generated by synthesizing the intermediate frames for the respective layers while taking their overlap in the depth direction into account.
In this second embodiment, as with the first embodiment, a three-dimensional model of an object can be utilized, so that it is possible to identify in the key frame the region where an occlusion is taking place and to divide the region of occlusion into layers in the depth direction. For the region in the key frame where no occlusion is taking place, an intermediate frame can be generated by the method described in the first embodiment without dividing into layers. In this second embodiment, however, if no occlusion takes place in a region, the region is treated as consisting of a single layer, so that this case is handled uniformly as a special case of the division into a plurality of layers.
FIG. 6 shows a structure of a pseudo-three-dimensional image generating apparatus 10 according to the second embodiment of the present invention. Unlike the first embodiment, the second processing unit includes an occlusion determining unit 19. The occlusion determining unit 19 determines whether there is an occlusion region in a key frame or not. The point-of-interest setting unit 20 and the corresponding-point computing unit 22 divide the occlusion region into layers, set points of interest thereon, and derive the correspondence relations thereof. The intermediate frame generator 26 generates a virtual intermediate frame for each of the layers and generates a final intermediate frame by synthesizing the intermediate frames of the layers.
FIG. 7 is a schematic illustration showing operations for drawing two three-dimensional objects OBJ1 and OBJ2 on a first key frame KF1 and a second key frame KF2. Here, points p1 and p2 are the point of interest and the corresponding point, respectively, relative to the first object OBJ1, whereas points q1 and q2 are the point of interest and the corresponding point, respectively, relative to the second object OBJ2. Moreover, point p is a point on the original first object OBJ1 corresponding to p1 and p2, whereas point q is a point on the original second object OBJ2 corresponding to q1 and q2. The other notation is the same as in the first embodiment.
In this example of FIG. 7, in the second key frame KF2, part of the first object OBJ1 is hidden behind the second object OBJ2 and as a result not visible. Hence, the corresponding point p2 that corresponds to the point of interest p1 on the first object OBJ1 does not exist on the second key frame KF2.
FIG. 8 is a flowchart showing a procedure for generating a pseudo-three-dimensional image according to the second embodiment. The procedure will be explained, using the example shown in FIG. 7. The occlusion determining unit 19 determines whether or not there is any region which is visible in the first key frame KF1 but is not visible in the second key frame KF2 due to the presence of an occlusion (S40). If there exists any occlusion region (Y of S40), the occlusion region is divided into layers in the depth direction (S42). In the example of FIG. 7, there is an occlusion in a part of the first object OBJ1 caused by the second object OBJ2, so that the part is divided into two layers L1 and L2, and the first object OBJ1 is associated with the first layer L1 and the second object OBJ2 is associated with the second layer L2.
Next, layers are set for deriving correspondence relations (S44). First, the first layer L1 is set. At subsequent steps S10 to S18, the same processing as in the first embodiment shown in FIG. 3 is performed for the first layer L1, and the correspondence relations at the first layer are derived, thus generating an intermediate frame at this layer. Then it is determined whether or not there is any layer left whose correspondence relations are to be checked (S46). Since the second layer remains (Y of S46), the process returns to step S44, and, in a similar manner, the correspondence relations are derived and an intermediate frame is generated for the second layer.
FIG. 9 shows schematic illustrations of a procedure for generating an intermediate frame by the use of a layered structure. The first key frame is divided into a first layer KF1-L1 and a second layer KF1-L2, in which the first object OBJ1 and the second object OBJ2 are drawn, respectively. In a similar manner, the second key frame is divided into a first layer KF2-L1 and a second layer KF2-L2, in which the first object OBJ1 and the second object OBJ2 are drawn, respectively.
First, for the first layer L1, the point-of-interest setting unit 20 sets a point-of-interest p1 on the first key frame KF1-L1, and the corresponding-point computing unit 22 computes a corresponding point p2 on the second key frame KF2-L1. The intermediate frame generator 26 generates an intermediate frame IM-L1 of the first object OBJ1 by an interpolation computation using the correspondence relationship obtained for the first layer L1. In a similar manner, for the second layer L2, a corresponding point q2 that corresponds to the point-of-interest q1 is computed, and an intermediate frame IM-L2 of the second object OBJ2 is generated based on the correspondence relationship obtained for the second layer L2.
Referring back to FIG. 8, upon completion of the processings for all the layers (N of S46), the intermediate frame generator 26 synthesizes the intermediate frames IM-L1 and IM-L2 obtained for the respective layers while taking their overlap in the depth direction into account, and generates a final intermediate frame IM (S48). When synthesizing the images of the first layer L1 and the second layer L2, a synthesis taking the overlap into account is achieved by overwriting the image of the first layer L1 with the image of the second layer L2 wherever the two regions overlap. In the buffer memory 30, the rendering data of the synthesized intermediate frame are stored together with the rendering data of the key frames, and they are rearranged in the proper order of frames, synchronized at predetermined output timing, and outputted (S50).
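The depth-aware synthesis of step S48 amounts to a painter's-algorithm overwrite. A minimal sketch follows, assuming (for illustration only) that each per-layer intermediate frame is a pixel-to-color map:

```python
def composite(layers):
    """Synthesize per-layer intermediate frames, ordered deepest first:
    nearer layers overwrite deeper ones wherever pixels overlap."""
    out = {}
    for layer in layers:
        out.update(layer)   # overwrite on overlap, keep elsewhere
    return out

# Intermediate frame of layer L1 (the occluded first object OBJ1) and of
# layer L2 (the nearer second object OBJ2); pixel (1, 1) overlaps.
im_l1 = {(0, 0): "red", (1, 1): "red"}
im_l2 = {(1, 1): "blue", (2, 2): "blue"}

final = composite([im_l1, im_l2])   # L2 wins at (1, 1), as in the text
```

Because each layer's frame was interpolated from an error-free correspondence, the overwrite order alone is enough to reproduce the occlusion in the synthesized frame.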
FIG. 10 is a flowchart showing a procedure for generating a pseudo-three-dimensional image in which a point on a three-dimensional object is the starting point of the processing. The occlusion determining unit 19 determines whether or not there is any region invisible due to an occlusion in at least one of the first and second key frames KF1 and KF2 (S40). If there is any occlusion region, the occlusion region is, in a manner similar to the case of FIG. 8, divided into layers in the depth direction, and the correspondence relations are derived and an intermediate frame is generated for each of the layers.
For the processing of each layer, the processing procedure designated by the same reference numerals and notation as in the first embodiment shown in FIG. 5 applies. For example, for the second layer L2, a point-of-interest q is set on the second object OBJ2 by the point-of-interest setting unit 20, and a first corresponding point q1 on the first key frame KF1-L2 for the second layer L2 and a second corresponding point q2 on the second key frame KF2-L2 for the second layer L2 are computed by the corresponding-point computing unit 22. The first and second corresponding points q1 and q2 are associated with each other at the second layer L2. Using the correspondence relationship thus obtained, the intermediate frame generator 26 generates an intermediate frame at the second layer L2.
The subsequent processing of synthesizing the intermediate frames at the respective layers (S48) and the processing of synchronization (S50) are the same as those in FIG. 8.
An image that has a layered structure as used in the present embodiment has a data structure in which a plurality of images having different depths in the depth direction are multiplexed, so that, for instance, each of the pixels of an image may have a plurality of values depending on the depths. These multiplexed data are obtained by sampling the surface of a three-dimensional object as seen from a certain viewpoint successively while changing the depth in the depth direction. Thus, when an image is taken out at a layer of a certain depth, the occlusion region becomes visible: even if a region is invisible due to an occlusion, the nearer object is no longer present when the image at a deeper layer is taken out. A data structure in which a plurality of images having different depths in the depth direction, as seen from a certain viewpoint, are multiplexed like this is called an LDI (Layered Depth Image).
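A minimal sketch of such a layered-depth-image structure follows; the per-pixel sample layout is an assumption made for illustration, not the patent's concrete format:

```python
# LDI sketch: each pixel holds samples at multiple depths, ordered near-to-far.
# Peeling to a deeper layer reveals surfaces hidden by occlusion.
ldi = {
    (1, 1): [(2.0, "blue"), (5.0, "red")],  # nearer object occludes a deeper one
    (0, 0): [(5.0, "red")],                 # no occlusion at this pixel
}

def layer_image(ldi, k):
    """Extract the k-th depth sample at each pixel, where one exists."""
    return {px: samples[k][1] for px, samples in ldi.items() if len(samples) > k}

front = layer_image(ldi, 0)    # what the viewpoint sees directly
behind = layer_image(ldi, 1)   # the occluded region becomes visible
```

Taking the image at k = 1 removes the nearer surface at pixel (1, 1), exposing the occluded sample, which is exactly the property the per-layer correspondence computation exploits.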
In the above explanation, an occlusion region of a key frame is divided into layers when there is any occlusion. However, if the image format has, in the first place, a data structure of a plurality of layered images having different depths in the depth direction, like an LDI, then the processing of determining the presence of an occlusion may be omitted and the generation of an intermediate frame for each layer may be started from the layered structure. Moreover, in the present embodiment where an original three-dimensional object model can be utilized, the image of a key frame, even when it does not originally have an LDI structure, can be reshaped into an LDI structure as required. In doing so, it is not necessary that all regions of the image be turned into a layered structure; it suffices that only the objects related to the occlusion region have the layered structure.
Some examples of modification based on the present embodiment will be described hereinafter. In the above explanation, the problem of occlusion is taken care of by the use of a layered structure, but there are a number of methods for dealing with occlusion without the use of the layered structure. Where there is an occlusion region, a corresponding point that corresponds to a point of interest is absent in the image. At this time, the corresponding-point computing unit 22 may output an error code indicating the absence of a corresponding point. A warning unit may further be provided whereby, when such an error occurs, the user is prompted to manually perform an interpolation for the point of interest which has no corresponding point.
Moreover, where there is a region which is visible in one of the first key frame and the second key frame but is not visible due to an occlusion in the other thereof, an image processor may further be provided whereby the image of the region invisible due to the occlusion is extracted from the image of the frame in which it is visible, and the corresponding-point computing unit 22 may compute the correspondence relations between the key frames by utilizing the extracted image as a virtual image for the occlusion region. Furthermore, this image processor may generate the image of the region invisible due to the occlusion from the original three-dimensional object model.
Moreover, the above-mentioned LDI structure may be used as the input format of images on the assumption that data, such as color information of the occlusion region, are already included in the image file. Data on the occlusion region like this can be extracted with relative ease by using a conventional program, such as the ray tracing method or the z-buffer method, for rendering three-dimensional CG. When the user fails to provide data on an occlusion region when providing image data, it is also possible to estimate the data on the occlusion region through the use of information on the key frame adjacent to the key frame which includes the occlusion region and information on the corresponding point thereof.
Moreover, even where a layered structure of images is assumed, it is not necessary that the computation of correspondence relationship and the generation of an intermediate frame be performed for each layer after the division into layers, but, instead, the correspondence relationship may be computed by adding layers as a condition at the point when it is found that the correspondence relation cannot be derived from the provided image due to an occlusion. Namely, when the corresponding-point computing unit 22 computes a corresponding point for the point of interest set by the point-of-interest setting unit 20, the occlusion determining unit 19 judges each time whether the corresponding point belongs to the occlusion region or not. When it is judged that the corresponding point belongs to the occlusion region, the corresponding-point computing unit 22 assigns a layered structure to the correspondence relationship and obtains a correspondence relationship under a constraint that both the point of interest and the corresponding point belong to the same layer. The flowchart of this example of modification is shown in FIG. 11.
Different from the case shown in FIG. 8, the occlusion determining unit 19, in the process where the corresponding-point computing unit 22 computes a corresponding point in the second key frame KF2, judges whether the corresponding point belongs to the occlusion region in the second key frame KF2 or not (S40). When it is judged that the corresponding point belongs to the occlusion region (Y of S40), the corresponding-point computing unit 22 sets layer information for the correspondence relationship (S42). For example, in the case where the point-of-interest setting unit 20 sets a point-of-interest p1 in the first key frame KF1 and the corresponding-point computing unit 22 computes a corresponding point p2 on the second key frame KF2, it is determined by the occlusion determining unit 19 that the corresponding point p2 belongs to the occlusion region in the second key frame KF2, and the layer information indicating that the point-of-interest p1 and the corresponding point p2 belong to the second layer L2 is recorded together with the correspondence relationship of these points. The intermediate frame generator 26, when generating an intermediate frame by an interpolation computation from the correspondence relationship, generates an intermediate frame for each layer using this layer information.
A similar modification may be possible with the procedure for generating a pseudo-three-dimensional image in which a point on the three-dimensional object serves as the starting point of processing as shown in FIG. 10. In a process where the corresponding-point computing unit 22 computes a first corresponding point in a first key frame and a second corresponding point in a second key frame, the occlusion determining unit 19 determines whether the first and the second corresponding points belong to the occlusion region in their respective key frames. Where at least one of the corresponding points belongs to the occlusion region, the corresponding-point computing unit 22 sets layer information for the correspondence relationship of the first and the second corresponding points.
In the second embodiment, a layer structure of images is utilized in processing the problem of occlusion, and it does not necessarily require as a precondition a data structure hierarchized according to the physical distance in the depth direction. A layer structure of images is acceptable so long as the objects related to the region where an occlusion occurs are hierarchized necessarily and sufficiently to eliminate the occlusion. By limiting the division into layers to the region where occlusion occurs, the cost for computing the correspondence relationship can be suppressed to a low level, and the amount of information necessary to store the correspondence relationship can be reduced.
In the above description, the operation Mi is performed on a three-dimensional object, but the Mi, as a three-dimensional coordinate transformation operator, may be applicable not only in object units but also in pixel units of an image or in vertex units of a polygon.
The present invention has been described based on embodiments. According to these embodiments, key frames only are rendered and intermediate frames can be generated with relative ease by interpolation computation using the correspondence relationship between key frames, so that the rendering time can be reduced markedly when compared with a general rendering technique for CG motion pictures where all the frames are rendered. For example, if a key frame is placed at a rate of one for every five frames, then the rendering time may be made nearly five times shorter, thus enabling high-speed rendering of moving pictures.
Moreover, according to these embodiments, intermediate frames are generated from the correspondence relationship between key frames only, which has nothing to do with rendering techniques. Thus, the process may be applied to the ray tracing method, radiosity method, photon mapping method and other methods, and can be implemented into a variety of rendering software.
Moreover, according to these embodiments, intermediate frames are generated at the time of drawing, so that the intermediate frames can be excluded and the CG motion pictures can be recorded in a reduced amount of data on the key frames and the correspondence relationship, thus making efficient compression of CG motion pictures possible. Moreover, according to the second embodiment, information on the layered structure is also recorded and the correspondence relationship taking occlusion into account is stored. Thus, moving pictures generated from the compressed CG motion pictures are free from errors due to occlusion, so that high-quality images are displayed.
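As a rough, hypothetical illustration (not taken from the patent; all names are invented), the per-layer interpolation of corresponding points between two key frames described above could be sketched like this:

```python
def interpolate_frames(correspondences, num_intermediate):
    """Generate intermediate frames between two key frames.

    correspondences: list of (p1, p2, layer) tuples, where p1 and p2 are the
    (x, y) positions of the same point in key frames KF1 and KF2, and layer
    is the layer index recorded for that correspondence (e.g. when an
    occlusion was detected).
    """
    frames = []
    for i in range(1, num_intermediate + 1):
        t = i / (num_intermediate + 1)  # interpolation parameter in (0, 1)
        frame = {}  # layer index -> list of interpolated point positions
        for (x1, y1), (x2, y2), layer in correspondences:
            # linear interpolation of the two corresponding positions
            p = ((1 - t) * x1 + t * x2, (1 - t) * y1 + t * y2)
            frame.setdefault(layer, []).append(p)
        frames.append(frame)
    return frames
```

For a single correspondence moving from (0, 0) to (4, 2) on layer 1, three intermediate frames place the point at (1.0, 0.5), (2.0, 1.0) and (3.0, 1.5); keeping the points grouped by layer allows the layers to be composited in depth order afterwards.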
As has been described, the present invention can be utilized for a method, apparatus, system and program for generating pseudo-three-dimensional images in which a three-dimensional object model is drawn.
Although the present invention has been described by way of exemplary embodiments, it should be understood that many changes and substitutions may be made by those skilled in the art without departing from the scope of the present invention which is defined by the appended claims.
Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US5742289 | 16 Aug 1996 | 21 Apr 1998 | Lucent Technologies Inc. | System and method of generating compressed video graphics images
US5969772 * | 30 Oct 1997 | 19 Oct 1999 | NEC Corporation | Detection of moving objects in video data by block matching to derive a region motion vector
US6414685 * | 8 Aug 1997 | 2 Jul 2002 | Sharp Kabushiki Kaisha | Method of processing animation by interpolation between key frames with small data quantity
US6834081 * | 19 Oct 2001 | 21 Dec 2004 | Samsung Electronics Co., Ltd. | Coding apparatus and method for orientation interpolator node
Non-Patent Citations
Reference
1. International Search Report for International Patent Application No. PCT/JP01/07807, ISA: Japanese Patent Office, Dec. 13, 2001.
Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US7365875 * | 7 May 2003 | 29 Apr 2008 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, program, and recording medium
US7929182 | 28 Jun 2007 | 19 Apr 2011 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, program, and recording medium
US8619198 | 28 Apr 2009 | 31 Dec 2013 | Lucasfilm Entertainment Company Ltd. | Adjusting frame rates for video applications
US20080309756 * | 17 Nov 2006 | 18 Dec 2008 | Koninklijke Philips Electronics, N.V. | Rendering Views for a Multi-View Display Device
US20110216833 * | 16 Oct 2009 | 8 Sep 2011 | Nokia Corporation | Sharing of motion vector in 3d video coding
Classifications
U.S. Classification: 345/421, 375/240, 348/402.1
International Classification: G06T13/20, G06T19/00, G06T3/00, G06T1/00
Cooperative Classification: G06T15/005, G06T13/00, G06T2210/44
European Classification: G06T15/00A, G06T13/00
Legal Events
Date | Code | Event | Description
11 Jul 2003 | AS | Assignment | Owner name: MONOLITH CO., LTD., JAPAN; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HIRAGA, MASAKI;HABUKA, KENSUKE;REEL/FRAME:014263/0851; Effective date: 20030606
9 Jun 2008 | AS | Assignment | Owner name: JBF PARTNERS, INC., JAPAN; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MONOLITH CO., LTD;REEL/FRAME:021064/0279; Effective date: 20080328
1 Sep 2009 | FPAY | Fee payment | Year of fee payment: 4
18 Oct 2013 | REMI | Maintenance fee reminder mailed | -
7 Mar 2014 | LAPS | Lapse for failure to pay maintenance fees | -
29 Apr 2014 | FP | Expired due to failure to pay maintenance fee | Effective date: 20140307
Effective date: 20140307 | __label__pos | 0.970206 |
I would like to check, across several tables, that the same keys / the same number of keys are present in each of the tables.
At the moment I have built a solution that checks the key count for each individual table, checks the key count when all the tables are joined together, and then compares the two.
This solution works, but I am wondering whether there is a more optimal one...
Example of the solution as it currently stands:
SELECT COUNT(DISTINCT variable) AS num_ids FROM table_a;
SELECT COUNT(DISTINCT variable) AS num_ids FROM table_b;
SELECT COUNT(DISTINCT variable) AS num_ids FROM table_c;
SELECT COUNT(DISTINCT a.variable) AS num_ids
FROM (SELECT DISTINCT VARIABLE FROM table_a) a
INNER JOIN (SELECT DISTINCT VARIABLE FROM table_b) b ON a.variable = b.variable
INNER JOIN (SELECT DISTINCT VARIABLE FROM table_c) c ON a.variable = c.variable;
UPDATE:
The difficulty I ran into when combining this into a single query is that any of the tables may not be unique on the VARIABLE I want to check, so I had to apply DISTINCT before the join to avoid the join fanning out.
2
Sam Gilbert, 24 Dec 2015 at 14:42
2 answers
Best answer
Since we are only counting, I think there is no need to join the tables on the variable column. A UNION should be enough. We still have to use DISTINCT to ignore/suppress the duplicates, which often means an extra sort. An index on variable should help to get the counts for the individual tables, but it will not help to get the count of the combined table.
Here is an example comparing two tables:
WITH
CTE_A
AS
(
SELECT COUNT(DISTINCT variable) AS CountA
FROM TableA
)
,CTE_B
AS
(
SELECT COUNT(DISTINCT variable) AS CountB
FROM TableB
)
,CTE_AB
AS
(
SELECT COUNT(DISTINCT variable) AS CountAB
FROM
(
SELECT variable
FROM TableA
UNION ALL
-- sic! use ALL here to avoid sort when merging two tables
-- there should be only one distinct sort for the outer `COUNT`
SELECT variable
FROM TableB
) AS AB
)
SELECT
CASE WHEN CountA = CountAB AND CountB = CountAB
THEN 'same' ELSE 'different' END AS ResultAB
FROM
CTE_A
CROSS JOIN CTE_B
CROSS JOIN CTE_AB
;
Three tables:
WITH
CTE_A
AS
(
SELECT COUNT(DISTINCT variable) AS CountA
FROM TableA
)
,CTE_B
AS
(
SELECT COUNT(DISTINCT variable) AS CountB
FROM TableB
)
,CTE_C
AS
(
SELECT COUNT(DISTINCT variable) AS CountC
FROM TableC
)
,CTE_ABC
AS
(
SELECT COUNT(DISTINCT variable) AS CountABC
FROM
(
SELECT variable
FROM TableA
UNION ALL
-- sic! use ALL here to avoid sort when merging two tables
-- there should be only one distinct sort for the outer `COUNT`
SELECT variable
FROM TableB
UNION ALL
-- sic! use ALL here to avoid sort when merging two tables
-- there should be only one distinct sort for the outer `COUNT`
SELECT variable
FROM TableC
) AS AB
)
SELECT
CASE WHEN CountA = CountABC AND CountB = CountABC AND CountC = CountABC
THEN 'same' ELSE 'different' END AS ResultABC
FROM
CTE_A
CROSS JOIN CTE_B
CROSS JOIN CTE_C
CROSS JOIN CTE_ABC
;
I deliberately chose CTEs because, as far as I know, Postgres materializes CTEs, and in our case each CTE will have only one row.
Using array_agg with ORDER BY is an even better option, if it is available on Redshift. You would still have to use DISTINCT, but you would not need to union all the tables together.
WITH
CTE_A
AS
(
SELECT array_agg(DISTINCT variable ORDER BY variable) AS A
FROM TableA
)
,CTE_B
AS
(
SELECT array_agg(DISTINCT variable ORDER BY variable) AS B
FROM TableB
)
,CTE_C
AS
(
SELECT array_agg(DISTINCT variable ORDER BY variable) AS C
FROM TableC
)
SELECT
CASE WHEN A = B AND B = C
THEN 'same' ELSE 'different' END AS ResultABC
FROM
CTE_A
CROSS JOIN CTE_B
CROSS JOIN CTE_C
;
2
Vladimir Baranov, 9 Mar 2016 at 11:54
Well, this is probably the nastiest piece of SQL I could have built for you :) I will always deny that I wrote it and claim that my stackoverflow account was hacked ;)
SELECT
'All OK'
WHERE
( SELECT COUNT(DISTINCT id) FROM table_a ) = ( SELECT COUNT(DISTINCT id) FROM table_b )
AND ( SELECT COUNT(DISTINCT id) FROM table_b ) = ( SELECT COUNT(DISTINCT id) FROM table_c )
By the way, this does not optimize the query: it still runs three queries (but I guess that is better than four?).
Quoting the other answer: "Since we are only counting, I think there is no need to join the tables on the variable column. A UNION should be enough. We still have to use DISTINCT to ignore/suppress the duplicates, which often means an extra sort. An index on variable should help to get the counts for the individual tables, but it will not help to get the count of the combined table."
SELECT DISTINCT
tbl_a.a_count,
tbl_b.b_count,
tbl_c.c_count
FROM
( SELECT COUNT(id) a_count, array_agg(id order by id) ids FROM table_a) tbl_a,
( SELECT COUNT(id) b_count, array_agg(id order by id) ids FROM table_b) tbl_b,
( SELECT COUNT(id) c_count, array_agg(id order by id) ids FROM table_c) tbl_c
WHERE
tbl_a.ids = tbl_b.ids
AND tbl_b.ids = tbl_c.ids
The above query only returns a row if all the tables contain the same ordered array of IDs, which guarantees that both the row counts and the IDs themselves match.
0
Trent, 8 Mar 2016 at 21:06
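As a quick end-to-end check of the counting approach discussed above, here is a rough sketch against a toy in-memory database using Python's sqlite3 module (table and column names mirror the question; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
for name in ("table_a", "table_b", "table_c"):
    cur.execute(f"CREATE TABLE {name} (variable INTEGER)")

# table_a deliberately contains a duplicate: only DISTINCT counts should match
cur.executemany("INSERT INTO table_a VALUES (?)", [(1,), (1,), (2,), (3,)])
cur.executemany("INSERT INTO table_b VALUES (?)", [(1,), (2,), (3,)])
cur.executemany("INSERT INTO table_c VALUES (?)", [(3,), (2,), (1,)])

# distinct key count per table
counts = [
    cur.execute(f"SELECT COUNT(DISTINCT variable) FROM {t}").fetchone()[0]
    for t in ("table_a", "table_b", "table_c")
]

# distinct key count over the union of all three tables
# (UNION ALL with one outer DISTINCT, as in the accepted answer)
union_count = cur.execute(
    "SELECT COUNT(DISTINCT variable) FROM ("
    "SELECT variable FROM table_a "
    "UNION ALL SELECT variable FROM table_b "
    "UNION ALL SELECT variable FROM table_c)"
).fetchone()[0]

# the key sets agree exactly when every per-table count equals the union count
same_keys = all(c == union_count for c in counts)
print(counts, union_count, same_keys)  # [3, 3, 3] 3 True
```

If one table were missing a key (say table_c lacked 3), its count would drop to 2 while the union count stayed at 3, and same_keys would be False.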
Questions tagged [arima]
Refers to the AutoRegressive Integrated Moving Average model used in time series modeling both for data description and for forecasting. This model generalizes the ARMA model by including a term for differencing, which is useful for removing trends and handling some types of non-stationarity.
1
vote
0answers
12 views
What will be the value of AR and MA order? [duplicate]
I want to determine the value of p and q of ARIMA model from the ACF and PACF plot given bellow. What will be the order and why?
0
votes
0answers
19 views
Start and End time of time series objects [closed]
Im having some issues regarding calculation of my forecast accuracy. I think it's because I can't figure out how to specify the start of my time series. It's daily data of sold units. The data is ...
15
votes
3answers
816 views
Confused about Autoregressive AR(1) process
I create an autoregressive process "from scratch" and I set the stochastic part (noise) equal to 0. In R: ...
0
votes
0answers
21 views
Approximate ARMA model with high order AR model using AIC
There are many sources on why a "low-order" ARMA$(p,q)$ model (with small but non-zero $p,q$) can be expressed, theoretically, as an AR$(\infty)$ model (or an MA$(\infty)$ as well). For example this ...
0
votes
0answers
6 views
How to write out the model of a logarithmic SARIMA model
I have a model log(Z)~ARIMA(1,0,1)x(0,1,2) s=12, where Z is the series. In regular cases I would use the following method to write out the model. So in my case it would be $(1-\phi_1 B)(1-B^{12})...
0
votes
0answers
16 views
a general approach to derive state space representation from ARMA?
I see that the likes of this question has been asked many times but I'm just wondering whether there is a general approach to write ARMA models in state space representation? I have an exam in a few ...
0
votes
0answers
13 views
I need help to define which process this is [closed]
Which process is this? I already know that it's not an MA $σ_{t}^2= 0.01+0.7 ε_{t−1}^2$ connected with this process $y_{t}=0.5+0.5y_{t−1}+σ_tε_t$ thank you for your help
1
vote
1answer
38 views
Out-of-sample Rolling window forecast with ARIMA(0,0,0) with non-zero mean
I am doing a rolling window out-of-sample forecast and have fitted an ARIMA(0,0,1) model to a first difference time series. People argue that sometimes simpler models are better than more complicated ...
0
votes
1answer
22 views
Tentative ARIMA models for forecasting
I am doing out-of-sample forecasting with ARIMA and derived one model (0,0,1) with auto.arima on a differenced time series. The series is daily observations over the course of 3 years. I would like to ...
0
votes
0answers
33 views
ARIMA forecasting interpretation [closed]
Doing an out-of-sample forecast on sales data in r with an ARMA(0,0,1) model. The MA(1) coefficient is significant but has a value of -0.97 which is really close to the stationarity restriction. But ...
0
votes
0answers
18 views
Comparison of ARIMA and VAR accuracy
Can someone help explaining how to compare a forecast from an ARIMA model and a VAR model. I have tried calculating MAPE, MSE, RMSE etc. for my VAR forecast, but i simply cannot get it to work. ...
1
vote
1answer
28 views
Neural network vs SARIMA
In real-time data, sometimes you find that you cannot get a certain seasonality for the data because it is difficult to identify. This happens a lot in the prices of commodities and the stock market ...
0
votes
1answer
19 views
Choosing the train data size automatically - Sarimax Time Series Model
I am working on a forecasting application on some credit data. The flow of money looks as follows: I am using a sarimax model since I have weekly seasonality. For ...
0
votes
0answers
42 views
AIC vs. p-values for coefficients of an ARIMA model
Do the p values associated with ARIMA coefficients have any significance attached to them particularly when they are small? To be precise, can it happen that for an ARIMA(2,0,0) model the lag 2 ...
0
votes
1answer
11 views
Interpretation of the ACF of standardised residuals vs actual residuals
Is there any scientific reason why a lot of studies and packages choose the ACF plot of the standardised residuals rather than the residuals themselves?
0
votes
1answer
19 views
Are the forecasting methods like mean, naive, drift, weighted average applicable to non stationary time series?
Like AR, MA models essentially need the series to be stationary, do the other forecast methods mentioned above also follow stationary?
2
votes
3answers
36 views
What methods of forecasting should I be looking at to forecast sales?
I am wanting to forecast sales of different products within a business. I have a good background in mathematics (but mainly focused on analysis, group theory, algebra etc. as opposed to statistics). ...
0
votes
1answer
28 views
Does it make sense to fit an ARIMA model to the remainder component of a timeseries?
Suppose I have a timeseries, something like this: ...
2
votes
2answers
66 views
Why TBATS model giving poor result?
I have time series data of number of units ordered from a manufacturing plant and number of units delivered. The are multiple different plant sites for which I need to build forecasting models. I ...
2
votes
1answer
94 views
Time Series Forecasting - Daily data
I'm relatively new to time series forecasting. I've been assigned with the task of forecasting operation time of an industrial equipment based on a daily data (3 years of daily data). The prediction ...
1
vote
0answers
30 views
Guessing ARMA order just from the plot
I have this two plots, each one contains two realisations (orange/blue) of the same $ARMA(pi, qi)$ model. All orange instances share the same noise sequence $e_i$, and so do all the blue ones. I don'...
0
votes
1answer
50 views
forecast rainfall using ARIMA in R
I am a new student approaching ARIMA prediction analysis in R. If the question is too simple or incorrect, please forgive and guide me. I am currently using the ARIMA provided in R. I use the data as ...
1
vote
1answer
68 views
Help me about using ARIMA forecasting rainfall [closed]
I am currently using the ARIMA provided in R. I use the data as the rainfall time series in QuyNhon (Vietnam) from 2000 to 2017 to forecast rainfall for the next several years. I wish that the ...
0
votes
0answers
22 views
Negative coefficients of regressors in arimax,should be positive
I have two years of daily time series inbound call centers data starting from Jan 2018 to Nov 2019. I am doing arimax and regressors are mainly promotional flag along with day of the week (sun, mon ......
0
votes
1answer
32 views
ARIMA model for a single variable with hidden context
I have a signal which measures the power of a machine. I have been asked to fit an ARIMA model for this signal in order to find anomalies. However as far as I know, the power of the machine is ...
0
votes
0answers
19 views
Interpreting ARMA models?
After much searching I was able to picture in my mind an ARMA model with this analogy: AR representing the sales of a given item. MA representing a coupon given for the item. This analogy is given ...
1
vote
0answers
67 views
Kalman filter for AR(1) plus noise
I am working the following AR(1) plus noise state-space model $$ z_{t} = x_{t} + v_{t}\\ x_{t} = \phi x_{t-1} + c + w_{t} $$ Therefore, the transition matrix is $[\phi]$, the observation matrix is $[1]...
0
votes
0answers
12 views
Mann-Kendall Test of a trend
From the construction of Mann-Kendall Test, I conclude that in the absence of a trend, the data is supposed to be i.i.d. Therefore, one can not use Mann-Kendall Test to test trend in ARMA models. Is ...
0
votes
0answers
17 views
Forecasting method for different cohorts with large seasonal swings but otherwise stable data
I am attempting to forecast percentage of churn for different cohorts. However, I am unsure how to proceed after selecting an initial method. The churn is fairly stable except for large seasonal ...
1
vote
1answer
26 views
Inconsistent Ljung-Box test result and plot of autocorrelation function of residuals
I get an inconsistent result for the Ljung-Box test: in fact when I run it using the Box.test function it doesn't make me reject the null hypothesis of residuals being white noise, but when I plot the ...
0
votes
0answers
14 views
Forecast evaluation in ar model
I have to compare in R 3 autoregressive models I've previously identified and estimated.The comparison should be based on an out-of-sample prediction: using (T-R, in my case 216) observations to ...
3
votes
1answer
74 views
Parameter space restriction in random walk + noise model
Suppose we have a random walk + noise model so \begin{align} y_t & = \mu_{t-1} + \epsilon_t\\ \mu_t & = \mu_{t-1} + \eta_t \end{align} Then, it's straightforward to show that $$\...
0
votes
1answer
23 views
Estimate of an AR model
I have this part of a project which states this: Once you choose the best three models for each series (according to AIC, the PACF and ACF, and "from general to specific), the next step is to estimate ...
2
votes
3answers
83 views
Difference between MA and AR
I fail to see the difference between Moving Average (MA): $x_t=\epsilon_t+β_1\epsilon_{t−1}+…+β_q\epsilon_{t−q}$ Autoregressive (AR): $x_t=\epsilon_t+β_1x_{t−1}+…+β_qx_{t−q}$ $x_t$ is ...
0
votes
0answers
25 views
Fitting the 'intercept only' regression model in Python
I am working on a project that includes time series forecasting and I decided to use ARIMA for this. Before determining p (AR order) and q (MA order), I need to run ADF test to determine the ...
2
votes
1answer
29 views
Anomaly detection using vector autoregression
I want to detect anomalies in multivariate time series using statistical approaches. In particular. I want to use a vector autoregression model like VAR, VARMA or VARIMA, to predict a time stamp $x_t$ ...
0
votes
0answers
24 views
Do these forecasts imply the ARIMA model is misspecified?
I have a time series of a stock return over more than 2 years. It's stationary (Augmented Dickey-Fuller test is significant). The plot looks like this: The ACF and PACF look like this: I think these ...
1
vote
1answer
70 views
How can I remove trend of model's forecast when I use ARIMA model?
I have to forecast future energy consumption. I decided to use ARIMA model. But my model's forecast shows the wrong trend. The blue line shows true value. And the orange line shows my model's forecast....
0
votes
1answer
42 views
Cointegration in ARIMAX regressions in R?
I’m running some ARIMA(X) regressions in R with several (control-) regressors including dummy variables and have some general questions concerning possibly cointegrated variables in ARIMA regressions. ...
3
votes
1answer
52 views
How to interpret the constant for an ARMA model
I'm trying to fit an ARMA(1,0) model for a timeseries that start at $10$ and drops slowly to $4$ in around $180$ steps. For this, I've tried to fit an ARMA model in python using the following: ...
0
votes
0answers
6 views
What are the necessary conditions so that the residuals of an ARIMA model behave as a normal distribution?
Simple and direct question I just had while implementing a solution: What are the necessary conditions so that the residuals of an ARIMA model behave as a normal distribution?
3
votes
1answer
141 views
Why is time series forecasting different for each software?
I have 2 different software programs: SPSS, and Statgraphics. I am using them for time series forecasting but Each one gives different arima parameters when using the auto ARIMA model, and The ...
1
vote
1answer
61 views
Differencing of AR(1) process
Let $z_{t}$ be stationary ARMA(p,q) (not ARIMA!) process. What would be the distribution of differencing of $z_{t}$? I mean the process $y_{t} = z_{t} - z_{t-1}$. My attempt: Let $z_{t}$ be ...
1
vote
1answer
64 views
I cannot understand formula for exogenous options in statsmodels' ARIMA
I need to use exogenous variables for my time series forecasting. And I found that I can include my exogenous variables into my ARIMA model using exogenous option. I want to know how this option ...
1
vote
0answers
54 views
Can I use ARIMA with hour data for two year prediction? [closed]
I am trying to use ARIMA model for time series forecasting. My data consists of hour by hour energy consumption. I have data for one year. So I have total 24*365 observations for energy consumption. ...
2
votes
1answer
40 views
95% prediction interval for an ARMA(2,2) model
What would the formula for a 95% prediction interval for an ARMA(2,2) model be? The specific model I am using is: an ARIMA(2,0,2) with non-zero mean, with the following parameter estimates: ...
0
votes
1answer
34 views
What is the correct model (AR, MA, or ARMA) and order for the data?
I am new to time series and forecasting and I have been assigned to determine the model and order for a data object. The ACF, PACF, and EACF are below: I was thinking it was an AR(1), but I am not ...
1
vote
1answer
42 views
Variance of AR(1) plus noise and its “equivalent” ARMA(1,1)
Let us consider the following state-space model $$ z_{t} = x_{t} + v_{t}\\ x_{t} = \phi x_{t-1} + w_{t} $$ where $ \phi< 1$, the errors $v_{t}\sim \mathcal{N}(0,V^{2})$ and $w_{t}\sim \mathcal{N}(0,...
2
votes
0answers
15 views
when fitting a regression model to a time-series, can I use lagged values of the time-series itself?
I'm fitting a regression model $y_t$ to a time series $x_t$ (not a dynamic model involving ARMA terms!). I saw that useful predictors to put in my model are $t$, seasonality variables and lagged ...
1
vote
1answer
15 views
when fitting a dynamic regression model to a TS, what would happen if we first fit a regression model and then fit an ARMA?
When fitting a dynamic regression model, we fit a model that has exogenous variables and also ARMA variables. What would happen if we first fit a regression of all exogenous variables, and then fit ... | __label__pos | 0.533157 |
Disabling an event handler once it has executed once
Problem/Question/Abstract:
Disabling an event handler once it has executed once
Answer:
Have you ever wanted to keep an event from firing again after it has executed once? Simply set the event property to nil in the body of the handler itself. For instance, let's say you want to disable the OnClick of a button once the user has pressed it. Here's the code to do that:
procedure TForm1.Button1Click(Sender: TObject);
begin
  { detach this handler so the OnClick event will not fire again }
  Button1.OnClick := nil;
end;
1
In Spivak's Calculus on Manifolds, p. 69, he claims that, if we define $T = Dg(a)$, then $(T^{-1}\circ g)'(a) = I$, where $I$ is the identity. Using the inverse function theorem, I am getting
$(T^{-1}\circ g)'(a) = ((T^{-1}\circ T')^{-1})\circ g(a) \cdot g'(a)$,
after which I get
$((T')^{-1}\circ T) \circ g(a) \cdot g'(a)$.
I believe $(T')^{-1}$ is the identity, considering this $T$ is a constant matrix, but I can't make any more progress.
1 Answer
1
$T$ is a fixed linear map. The derivative of any linear map is itself (at any point of the vector space). Go back to the definition if you're not sure about this. You should have $$(T^{-1})'(g(a))\cdot g'(a) = T^{-1}\cdot g'(a) = (g'(a))^{-1}\cdot g'(a) = I.$$
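To spell out the key fact used in the answer above (an elementary check, not part of the quoted answer), the derivative of a linear map can be read off directly from the definition:

```latex
% For a linear map T and any point x, linearity gives
%   T(x + h) - T(x) - T(h) = 0   for all h,
% so the remainder in the definition of the derivative vanishes identically:
\lim_{h \to 0} \frac{\lVert T(x+h) - T(x) - T(h) \rVert}{\lVert h \rVert} = 0,
\qquad\text{hence } DT(x) = T \text{ for every } x.
% In particular, T^{-1} is also linear, so (T^{-1})'(g(a)) = T^{-1}.
```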
Django vs. Laravel vs. Trailblazer
Favorites and community stats (Hacker News / Reddit / Stack Overflow):

Tool        | Favorites | Hacker News | Reddit | Stack Overflow
Django      | 224       | 2.49K       | 4.12K  | 164K
Laravel     | 223       | 516         | 2.36K  | 81.6K
Trailblazer | 4         | -           | 68     | 48
Description
What is Django?
Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design.
What is Laravel?
Laravel is a web application framework with expressive, elegant syntax. We believe development must be an enjoyable, creative experience to be truly fulfilling. Laravel attempts to take the pain out of development by easing common tasks used in the majority of web projects, such as authentication, routing, sessions, and caching.
What is Trailblazer?
Trailblazer is a thin layer on top of Rails. It gently enforces encapsulation, an intuitive code structure and gives you an object-oriented architecture. In a nutshell: Trailblazer makes you write logicless models that purely act as data objects, don't contain callbacks, nested attributes, validations or domain logic. It removes bulky controllers and strong_parameters by supplying additional layers to hold that code and completely replaces helpers.
Latest News
Django security releases issued: 2.0.2 and 1.11.10
2017 Malcolm Tredinnick Memorial Prize awarded to Cl...
The DSF Welcomes Carlton Gibson as its Newest Fellow
Laravel Zero 5.6 is Now Released
LaraStream – A Laravel live stream community
Building a Vue SPA with Laravel Part 3
1: /*
2: * Copyright (c) 1980 Regents of the University of California.
3: * All rights reserved. The Berkeley software License Agreement
4: * specifies the terms and conditions for redistribution.
5: *
6: * @(#)inquire.c 5.2 7/30/85
7: */
8:
9: /*
10: * inquire.c - f77 i/o inquire statement routine
11: */
12:
13: #include "fio.h"
14:
15: f_inqu(a) inlist *a;
16: { char *byfile;
17: int i;
18: int exist;
19: unit *p;
20: char buf[256], *s;
21: long x_inode;
22:
23: elist = NO;
24: lfname = a->infile;
25: lunit = a->inunit;
26: external = YES;
27: p = NULL;
28: if(byfile=a->infile)
29: {
30: g_char(a->infile,a->infilen,buf);
31: if((x_inode=inode(buf))==-1)
32: { exist = NO; /* file doesn't exist */
33: }
34: else
35: { exist = YES; /* file does exist */
36: for(i=0;i<MXUNIT;i++)
37: if(units[i].ufd && (units[i].uinode==x_inode))
38: {
39: p = &units[i];
40: break;
41: }
42: }
43: }
44: else
45: {
46: if (not_legal(lunit))
47: { exist = NO; /* unit doesn't exist */
48: }
49: else
50: { exist = YES;
51: if (units[lunit].ufd)
52: { p= &units[lunit];
53: lfname = p->ufnm;
54: }
55: }
56: }
57: if(a->inex) *a->inex = exist;
58: if(a->inopen) *a->inopen=(p!=NULL);
59: if(a->innum) *a->innum = byfile?(p?(p-units):-1):lunit;
60: if(a->innamed) *a->innamed= (byfile || (p && p->ufnm));
61: if(a->inname)
62: {
63: if(byfile) s = buf;
64: else if(p && p->ufnm) s = p->ufnm;
65: else s="";
66: b_char(s,a->inname,a->innamlen);
67: }
68: if(a->inacc)
69: {
70: if(!p) s = "unknown";
71: else if(p->url) s = "direct";
72: else s = "sequential";
73: b_char(s,a->inacc,a->inacclen);
74: }
75: if(a->inseq)
76: {
77: if(!p) s = "unknown";
78: else s = (p && !p->url)? "yes" : "no";
79: b_char(s,a->inseq,a->inseqlen);
80: }
81: if(a->indir)
82: {
83: if(!p) s = "unknown";
84: else s = (p && p->useek && p->url)? "yes" : "no";
85: b_char(s,a->indir,a->indirlen);
86: }
87: if(a->inform)
88: { if(p)
89: {
90: #ifndef KOSHER
91: if(p->uprnt) s = "print"; /*** NOT STANDARD FORTRAN ***/
92: else
93: #endif
94: s = p->ufmt?"formatted":"unformatted";
95: }
96: else s = "unknown";
97: b_char(s,a->inform,a->informlen);
98: }
99: if(a->infmt)
100: {
101: if (p) s= p->ufmt? "yes" : "no";
102: else s= "unknown";
103: b_char(s,a->infmt,a->infmtlen);
104: }
105: if(a->inunf)
106: {
107: if (p) s= p->ufmt? "no" : "yes";
108: else s= "unknown";
109: b_char(s,a->inunf,a->inunflen);
110: }
111: if(a->inrecl) *a->inrecl = p ? p->url : -1;
112: if(a->innrec) {
113: if(p && p->url)
114: *a->innrec = ((ftell(p->ufd) + p->url - 1)/p->url) + 1;
115: else
116: *a->innrec = -1;
117: }
118: if(a->inblank)
119: {
120: if( p && p->ufmt)
121: s = p->ublnk ? "zero" : "null" ;
122: else
123: s = "unknown";
124: b_char(s,a->inblank,a->inblanklen);
125: }
126: return(OK);
127: }
Defined functions
f_inqu defined in line 15; never used
Last modified: 1985-07-31
It-essentials v7.0 Chapter 8 Exam Answers
1. Match the common printer configuration options to the correct descriptions. (Not all options are used.)
(Matching exercise shown as an image: It-essentials v7 Chapter 8 Exam Answers q1)
2. What are two cables that are used to connect a computer to a printer? (Choose two.)
serial*
FireWire*
PS/2
HDMI
eSATA
Explanation:
Wiring a PC to a printer can be done through the following connections: serial, parallel (IEEE 1284 and SCSI), USB, Firewire (IEEE 1394) and Ethernet. PS/2, HDMI, and eSATA connections are used for other purposes.
3. What are two functions of a print server? (Choose two.)
provide print resources to all connected client computers*
store print jobs in a queue until the printer is ready*
ensure that the connected client computers have up-to-date printer drivers
store backups of documents sent to the printer
provide uninterrupted power to the printer
Explanation:
A print server can provide client computers access to print resources, manage print jobs by storing them in a queue until the print device is ready for them, and provide print job status messages to users.
4. What are two methods to connect to a printer wirelessly? (Choose two.)
IEEE 802.11 standards*
Bluetooth*
WiMax
satellite
microwave
Explanation:
Wireless printers can use Bluetooth, 802.11x, or infrared interfaces to connect wirelessly. WiMax, satellite, and microwave radio technologies are in practice never used to connect a printer to a network.
5. What are two probable causes for printer paper jams? (Choose two.)
high humidity*
the wrong type of paper*
a defective print cartridge
misaligned print heads
an incorrect print driver
Explanation:
Humidity can cause sheets of paper to stick together which could cause paper jams. Using the wrong type of paper for a printer might also create a jamming issue.
6. Which factor affects the speed of an inkjet printer?
the desired quality of the image*
the size of printer power supply
the quality of the paper
the cost of the inkjet cartridges
Explanation:
The speed of an inkjet printer is determined mostly by the make and model of the printer, the quality of printing, and the complexity of the image. The quality of the paper, the cost of the cartridges, and the size of the power supply do not affect the speed of printing.
7. What is a characteristic of thermal inkjet nozzles?
The heat creates a bubble of steam in the chamber.*
Heat is applied to the ink reservoir of each nozzle.
The vibration of the crystal controls the flow of ink.
A charge is applied to the printhead.
Explanation:
Thermal inkjet nozzles work based on heat creating steam bubbles. Piezoelectric inkjet nozzles work based on vibration of crystals.
8. In laser printing, what is the name of the process of applying toner to the latent image on the drum?
developing*
charging
transferring
fusing
Explanation:
The toner is applied to the latent image on the drum during developing. The toner attached to the latent image is transferred to the paper during transferring. The toner transferred is melted to the paper during fusing, and the drum gets a uniform negative charge during charging.
9. What is the purpose of the Additional Drivers button in the Sharing tab of the Printer Properties?
to add additional drivers for other operating systems*
to add additional drivers for other printers in the network
to add additional drivers for duplex printing
to add additional drivers for other printers connected to the computer
Explanation:
The Additional Drivers button loads drivers in the sharing computer for other operating systems that the client computers may be running. For other printers in the network, the drivers must be loaded in the computers sharing the other printers. There is no need for additional drivers for duplex printing. The required drivers for other printers connected to the sharing computer will be loaded automatically.
10. A Windows 7 computer has several printers configured in the Control Panel Devices and Printers window. Which printer will the computer choose to be the first option for printing?
the printer that is set as the default printer*
the software-based printer that is used to create PDF files
the software-based printer that is used to create XPS files
a manual selection, which is always needed
Explanation:
The Default Printer option is set globally and it will be the first printer to be chosen to print a job, unless Other Printer is selected in a Per-Document manner. Software-based printers are just instances of different kinds of printers, but for them to be chosen to do the job, they will need to be selected either as a default or per-document printer. There is no need to select a printer every time a printing job is sent.
11. Which type of print server provides the most functions and capabilities?
a dedicated PC print server*
a computer-shared print server
a hardware print server
a print server that is implemented in software
Explanation:
A dedicated PC print server has its own resources for the job and can manage several printers. A computer-shared print server uses its resources for sharing as well as for its own PC tasks. A hardware print server can only manage one printer, and a software print server cannot exist without hardware to run it.
12. A user discovers that an inkjet color printer is printing different colors from those that are shown on the screen. What can be done to solve this problem?
Calibrate the printer.*
Adjust the printer spool.
Replace the fuser.
Replace the drum.
Explanation:
Calibrating the printer will align the heads and help in dosing the ink. The print spool problems are more related to queues. Color inkjet printers do not use drums or fusers.
13. What is a characteristic of global and per-document options in print settings?
Per-document options override global options.*
Global options take precedence over per-document options.
It is not possible to configure per-document options.
It is not possible to configure global options.
Explanation:
Global settings act as default settings, whereas per-document settings are helpful with specialized documents, like letters and spreadsheets.
14. After applying a solution to a printer problem, a technician restarts the printer and prints a test page. Which step of the troubleshooting process is the technician applying?
verifying the solution and system functionality*
identifying the problem
testing a theory to determine the cause of the problem
documenting findings, actions, and outcomes
Explanation:
Testing a printer by restarting it and printing a test page are actions taken within the verification of the solution and system functionality step.
15. A technician recorded that a new fuser roller unit was installed in a laser printer to solve a printing problem. Which step in the troubleshooting process did the technician just perform?
documenting findings, actions, and outcomes*
verifying the solution and system functionality
identifying the problem
testing a theory to determine the cause of the problem
Explanation:
Recording the components and parts used to fix a printer problem is part of the documenting findings, actions, and outcomes step.
16. Which action supports an effective printer preventive maintenance program?
Reset the printer page counters if available.*
Replace laser printer toner at set predetermined time intervals.
Clean inkjet print heads when they stop working.
Disconnect the printer from the power source when it is not in use.
Explanation:
Resetting the page counters will assist in documenting printer usage and planning future maintenance. Cleaning parts only when they are no longer working, replacing consumables at set time intervals regardless of need, and disconnecting the printer from the power when it is not in use are not related to a preventive maintenance program.
17. How can the life of a thermal printer be extended?
Clean the heating element regularly with isopropyl alcohol.*
Clean the inside of the printer with a vacuum cleaner with a HEPA filter.
Keep the paper dry in a low humidity environment.
Wipe the outside of the printer with a damp cloth.
Explanation:
The life of a thermal printer can be extended by cleaning the heating element regularly with isopropyl alcohol. A thermal printer does not use toner, so cleaning with a vacuum cleaner with a HEPA filter is not necessary. The condition of the paper and the cleanliness of the outside of the printer would probably do little to extend the life of the printer.
18. In Windows 8, what must be configured to enable one user to share a USB-connected printer with another user on the same network?
Windows firewall
Windows Defender
IEEE 802.11
File and printer sharing*
virtualization option in BIOS
Explanation:
In both Windows 7 and 8, use the Network and Sharing Center control panel > Change advanced sharing settings to select the Turn on file and printer sharing option.
19. The users on a LAN are reporting that computers respond slowly whenever high resolution photographs are being printed on the color laser printer. What would be the cause of this problem?
The printer does not have enough memory to buffer an entire photograph.*
The printer is not configured for duplex printing.
The paper is not adequate for photograph printing.
The printer is not configured for the proper paper orientation.
Explanation:
Printer memory affects print speed and efficiency. Laser printers use memory to buffer print jobs, capturing them in printer memory while allowing computers to proceed with other work while printing.
20. A technician is installing a printer that will be directly connected to a computer. Why does the technician not connect the printer initially during the installation process?
The printer needs to be configured first.
The OS of the workstation needs to be updated first.
The printer driver might need to be installed first before the printer is connected.*
The Microsoft download site needs to be searched first for the printer driver.
Explanation:
When installing a new printer that is attached directly to a workstation, a technician should read the installation instructions carefully. For some printers, the printer driver should be installed on the computer before the printer is connected.
21. Which statement describes a printer driver?
It is the interface in Windows that identifies a unique printer attached to the workstation.
It is cache inside a printer that stores documents to be printed in a queue.
It is software that converts a document into the format that a printer can understand.*
It is the configuration code that is sent to a printer to identify the operating system of the workstation.
Explanation:
Printer drivers are software programs that convert various media types in a document to a stream of commands in a language that the printer can understand. During a printer installation, Windows links an interface called “printer” with a specific driver that is compatible with the actual printer hardware.
22. What type of connection would be used to connect a printer directly to the network?
Ethernet*
serial
USB
Firewire
Explanation:
To connect a printer directly to the network an Ethernet connection through an RJ-45 interface would be used.
23. What mechanism is used in a laser printer to permanently fuse the toner to the paper?
heat*
electrical charge
pressure
light
Explanation:
Heat is used during the fusing process in a laser printer to permanently fuse the toner to the paper.
24. What corrective action would a technician take in response to a print spooler error?
reboot the printer
restart the print spooler*
update the printer driver
clean the printer
Explanation:
Print spooler errors can occur when the printer service is stopped or not working properly. The first corrective action is to restart the print spooler. In some cases, it may be necessary to reboot the computer.
25. What corrective action should be taken if a printer is printing faded images?
secure loose printer cables
update the print driver
clean the printer
replace the toner cartridge*
Explanation:
If a printer is printing faded images, it is an indication that either the toner is low or the toner cartridge is defective.
26. What would cause an inkjet printer to fail to print any pages?
The printer software is set to toner save.
The printer ribbon is worn out.
The ink cartridge is empty.*
The printer is loaded with a paper type other than photo paper.
Explanation:
An inkjet printer will commonly fail to print when the inkjet cartridge is empty.
27. A user tells a technician that the printer does not respond to attempts to print a document. The technician attempts to print a document and the printer does not output any pages. The technician notices that the printer LCD display is blank and unlit. What is most likely the problem?
The screen contrast is too low.
The printer is not turned on.*
The printer is out of ink.
The print head is clogged.
Explanation:
If the printer display is displaying nothing and the printer is not attempting to print a document, most likely the printer is turned off.
28. A technician is complaining about the following printer issue: The toner is not fusing to the paper. What are two possible causes of this issue? (Choose two.)
The toner cartridge is defective.*
The paper might be incompatible with the printer.*
A test page was never printed.
The laser printer is emitting too much radiation.
The printer lid has not been closed securely.
29. A librarian is complaining about the following printer issue: My impact printer produces faded or light characters. What are two possible causes of this issue? (Choose two.)
The ribbon may be worn out.*
The ribbon may be damaged.*
The printer does not have enough RAM.
The wrong printer type has been selected.
The laser printer is emitting too much radiation.
30. A teacher is complaining about the following printer issue: The paper is creased after printing. What are two possible causes of this issue? (Choose two.)
The paper-feed tray might not be firmly adjusted against the edges of the printer.*
The paper might be loaded incorrectly.*
The printer has been installed on the wrong port.
The wrong printer type has been selected.
The laser printer is emitting too much radiation.
31. A receptionist is complaining about the following printer issue: The print queue seems to be functioning properly, but the printer does not print. What are two possible causes of this issue? (Choose two.)
There is a bad cable connection.*
The printer has an error such as out of paper, out of toner, or paper jam.*
The print queue is overloaded.
The wrong printer type has been selected.
The laser printer is emitting too much radiation.
32. A photographer is complaining about the following printer issue: The printer issues a “Document failed to print” message. What are two possible causes of this issue? (Choose two.)
A cable is loose or disconnected.*
The printer is no longer shared.*
The paper tray is flimsy.
The wrong printer type has been selected.
The laser printer is emitting too much radiation.
33. A reporter is complaining about the following printer issue: The paper jams when the printer is printing. What are two possible causes of this issue? (Choose two.)
The printer could be dirty.*
The humidity could be high and that causes the paper to stick together.*
The laser printer is emitting too much radiation.
The wrong printer type has been selected.
The printer lid has not been closed securely.
34. A manager is complaining about the following printer issue: The ink printer is printing blank pages. What are two possible causes of this issue? (Choose two.)
The print head is clogged.*
The printer is out of ink or toner.*
The printer is using the wrong cable.
The wrong printer type has been selected.
The laser printer is emitting too much radiation.
35. A technician is complaining about the following printer issue: The print appears faded on the paper. What are two possible causes of this issue? (Choose two.)
The toner cartridge is low.*
The paper might be incompatible with the printer.*
A test page was never printed.
The room temperature is above normal.
The printer is using the wrong cable.
36. A librarian is complaining about the following printer issue: The printer control panel displays no image. What are two possible causes of this issue? (Choose two.)
The contrast of the screen may be set too low.*
The printer is not turned on.*
The room temperature is above normal.
The printer does not have enough RAM.
The printer is using the wrong cable.
37. A teacher is complaining about the following printer issue: The paper is creased after printing. What are two possible causes of this issue? (Choose two.)
The paper-feed tray might not be firmly adjusted against the edges of the printer.*
The paper might be loaded incorrectly.*
Print jobs are being sent to the wrong printer.
The room temperature is above normal.
The printer is using the wrong cable.
38. All documents printed by the laser printer in the branch office have ghost or shadow images appearing on the paper. What should the technician do to resolve the issue?
Replace the drum.*
Configure the printer for duplex printing.
Update the OS.
Update the BIOS.
Check the vibration of the crystals.
39. An office assistant in a law firm is trying to print many large legal documents but is getting “memory overload” error messages from the printer. What should the technician do to resolve the issue?
Add more memory to the printer.*
Replace the pickup roller.
Check the vibration of the crystals.
Install a USB hub.
Connect the printer using wireless.
40. A reporter is trying to print several high resolution photographs but the color laser printer is going extremely slowly. What should the technician do to resolve the issue?
Add more RAM to the printer.*
Replace the pickup roller.
Check the vibration of the crystals.
Install a USB hub.
Connect the printer using wireless.
41. A reporter is trying to print several high resolution photographs but the color laser printer is going extremely slowly. What should the technician do to resolve the issue?
Add more RAM to the printer.*
Rewind the ribbon.
Install a USB hub.
Connect the printer using wireless.
Update the BIOS.
42. A new printer has just been installed deep in a mine. When test pages are printed, the paper constantly jams. What should the technician do to resolve the issue?
Move the printer to a less-humid location.*
Rewind the ribbon.
Install a USB hub.
Connect the printer using wireless.
Update the BIOS.
43. A new printer has just been installed deep in a mine. When test pages are printed, the paper constantly jams. What should the technician do to resolve the issue?
Move the printer to a less-humid location.*
Clean the printer.
Install a USB hub.
Connect the printer using wireless.
Update the BIOS.
44. A user complains that recently every printed document has vertical lines and streaks on the paper. What should the technician do to resolve the issue?
Distribute the toner more evenly within the cartridge.*
Clean the printer.
Install a USB hub.
Connect the printer using wireless.
Update the BIOS.
45. A school is installing a new shared printer but the printer is not showing up on the list of available printers. What should the technician do to resolve the issue?
Manually add the printer IP address.*
Reset the page counter.
Install a USB hub.
Connect the printer using wireless.
Update the BIOS.
46. A school is installing a new shared printer but the printer is not showing up on the list of available printers. What should the technician do to resolve the issue?
Manually add the printer IP address.*
Reset the page counter.
Rewind the ribbon.
Clean the printer.
Check the vibration of the crystals.
47. Employees are complaining that they send print jobs to a newly installed printer but the print jobs never print. What should the technician do to resolve the issue?
The printer is connected to the wrong port. Modify the port.*
Check the vibration of the crystals.
Rewind the ribbon.
Clean the printer.
Reset the page counter.
What is the derivative of #sqrt(1/x^3)#?
1 Answer
Apr 25, 2016
#-3/2x^(-5/2)#
Explanation:
The most important thing here is not calculus but algebra. In particular, the properties of exponents.
Note that #sqrt(1/x^3)# is equivalent to #sqrt(x^(-3))# (because #1/a# is equal to #a^(-1)#). Using the property #root(a)(x^b)=x^(b/a)#, #root(2)(x^(-3))=x^(-3/2)#. Our problem now is simply finding the derivative of #x^(-3/2)#, which is done easily using the power rule:
#d/dxx^(-3/2)=-3/2*x^(-3/2-1)=-3/2x^(-5/2)# | __label__pos | 0.999615 |
#include <iostream>
#include <ctime>
using namespace std;
// Dynamic Single Array of integers
#define dAry
class DynaArray {
private:
int SIZE;
int* array;
public:
DynaArray(); // Default Constructor
DynaArray(const DynaArray& dAry); // Copy Constructor
~DynaArray(); // Destructor
DynaArray& operator= (const DynaArray& dAry); // Assignment Operator
// Accessors
int getSize() const;
int getArrayElement (int index) const;
void setArrayElement (int index, int value);
inline void print () const;
// Mutators
void resetSize(int size);
void randomInit();
void create();
void init();
};
void DynaArray::create() {
array = new int [SIZE];
}
void DynaArray::init() {
for (int SIZE = 0; SIZE < SIZE; SIZE++);
DynaArray::DynaArray(); {
SIZE = 0; // set to an empty array
array = NULL; // points nowhere
}
DynaArray::DynaArray ( const DynaArray& dAry); {
this->SIZE = dAry.getSize ( );
create();
init();
// Bitwise-copy each pre-existing array element into this new array
for ( int i = 0; i < SIZE; i++) {
int array[i] = dAry.array[i];
}
}
DynaArray::~DynaArray(); {
delete [] array;
}
// Logical-Copy Assignment Operator
DynaArray& DynaArray::operator = (const DynaArray& dAry) {
this-> SIZE = dAry.get Size ( );
delete [] array;
array = new int [SIZE];
for (int i = 0; i <SIZE; i++)
array [i]=dAry.array[i];
return* this;
}
// Accessor - get size of array
int DynaArray::getSize() const {
return this->SIZE;
}
// Accessor - get an array element
int DynaArray::getArrayElement (int index) const {
// if array index is valid, then return element
if (index >= 0 && index < SIZE)
return array[index];
// else return 0
return 0;
}
// inline function - prints array
void DynaArray::print () const {
// if array is empty, print message
if (SIZE == 0) {
cout << "array empty" << endl;
return;
}
// else print array
for (int i = 0; i < SIZE; i++) {
cout << array[i] << ", ";
}
cout << "\b\b " << endl; // backspace out last comma
}
// Mutator - set an array element
void DynaArray::setArrayElement (int index, int value) {
// if array index is valid, then reset element
if (index >= 0 && index < SIZE)
array[index] = value;
}
// Mutator - Resets the size of array
// re-initializing all elements to 0
void DynaArray::resetSize(int size) {
DynaArray temp (*this);
SIZE = size
delete [] array;
array = new int [SIZE];
for (int s=0; s<SIZE; s++) {
array [s] = 0;
}
If (SIZE < temp.SIZE)
for (int s=0; s<SIZE; s++) {
array [s] = temp.array[s];
}
else
for (int s=0; s<temp.SIZE; s++) {
array [s] = temp.array[s];
}
}
// Mutator - randomly initializes all elements in array
// to integer values from 0 thru 9
void DynaArray::randomInit() {
// init all elements to random numbers from 0 thru 9
for (int i = 0; i < SIZE; i++) {
array[i] = rand()%10;
}
//Mutator - appends array at the end of array
void DynaArray::append() {
resetSize(SIZE+1);
this->array[SIZE-1] = rand()%10;
}
}
/*** MAIN FUNCTION ***/
void main () {
// Declarations
DynaArray dAry1;
DynaArray dAry2;
// Set random seed
srand(time(NULL));
// Print empty array
cout << "New Array1: \n";
cout << "dAry1 = ";
dAry1.print();
cout << endl;
// Re-size array and re-print
cout << "Resized Array1: \n";
dAry1.resetSize(10);
cout << "dAry1 = ";
dAry1.print();
cout << endl;
// randomly initialize array and re-print
cout << "Re-initialized Array1: \n";
dAry1.randomInit();
cout << "dAry1 = ";
dAry1.print();
cout << endl;
// copy dAry1 into dAry2 and print dAry2
cout << "Array2 is assigned Array1: \n";
dAry2 = dAry1;
cout << "dAry2 = ";
dAry2.print();
cout << endl;
// Re-size dAry2 & re-initialize elements randomly
dAry2.resetSize(6);
dAry2.randomInit();
// re-print both arrays
cout << "Down-sized and reinitialized Array2: \n";
cout << "dAry1 = ";
dAry1.print();
cout << "dAry2 = ";
dAry2.print();
cout << endl;
// Re-size dAry2
dAry2.resetSize(12);
// re-print dAry2
cout << "Up-sized Array2: \n";
cout << "dAry2 = ";
dAry2.print();
cout << endl;
// Append to dAry2
dAry2.append();
// re-print dAry2
cout << "Appended value to Array2: \n";
cout << "dAry2 = ";
dAry2.print();
cout << endl;
}]
the following errors i get are
error C2143: syntax error : missing ')' before 'const'
error C2059: syntax error : ')'
error C2059: syntax error : '.'
error C2057: expected constant expression
error C2466: cannot allocate an array of constant size 0
error C2059: syntax error : '.'
error C2601: '=' : local function definitions are illegal
error C2601: 'getSize' : local function definitions are illegal
error C2601: 'getArrayElement' : local function definitions are illegal
error C2601: 'print' : local function definitions are illegal
error C2601: 'setArrayElement' : local function definitions are illegal
error C2601: 'resetSize' : local function definitions are illegal
error C2601: 'randomInit' : local function definitions are illegal
error C2601: 'main' : local function definitions are illegal
fatal error C1004: unexpected end of file found
1) Please indent your code.
2) What are the line numbers of the errors?
3) void main () should be int main (void)
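Most of those errors trace back to stray semicolons between a function header and its body (e.g. `DynaArray::DynaArray(); {` and `DynaArray::~DynaArray(); {`): the semicolon ends the declaration, so the `{ ... }` that follows is just a bare block. Since `init()` is also never closed with a matching `}`, everything after it ends up nested inside it, which is exactly what the string of C2601 "local function definitions are illegal" errors is complaining about. There is also a space in `dAry.get Size ( )`, a missing semicolon after `SIZE = size`, and a capital-I `If` in `resetSize`. A minimal corrected sketch of the special member functions (not the poster's full program) might look like this:

```cpp
#include <cassert>
#include <cstddef>

// Corrected sketch of the special member functions only -- note there is
// no ';' between a definition's ')' and its '{'.
class DynaArray {
    int  SIZE;
    int* array;
public:
    DynaArray() : SIZE(0), array(NULL) {}            // default: empty array

    DynaArray(const DynaArray& dAry)                 // deep copy each element
        : SIZE(dAry.SIZE), array(new int[dAry.SIZE]) {
        for (int i = 0; i < SIZE; i++)
            array[i] = dAry.array[i];
    }

    ~DynaArray() { delete[] array; }

    DynaArray& operator=(const DynaArray& dAry) {
        if (this != &dAry) {                         // guard against self-assignment
            delete[] array;
            SIZE  = dAry.SIZE;
            array = new int[SIZE];
            for (int i = 0; i < SIZE; i++)
                array[i] = dAry.array[i];
        }
        return *this;
    }

    int getSize() const { return SIZE; }             // not "get Size ( )"

    int getArrayElement(int index) const {
        return (index >= 0 && index < SIZE) ? array[index] : 0;
    }

    void setArrayElement(int index, int value) {
        if (index >= 0 && index < SIZE) array[index] = value;
    }

    void resetSize(int size) {                       // keep old values that fit
        int* grown = new int[size]();                // value-initialized to 0
        int  keep  = (size < SIZE) ? size : SIZE;
        for (int i = 0; i < keep; i++)
            grown[i] = array[i];
        delete[] array;
        array = grown;
        SIZE  = size;                                // ';' was missing; "If" must be "if"
    }
};
```

Once the definitions compile, `main` should return `int` as noted above.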
Trailing-Edge - PDP-10 Archives - BB-H311B-RM - rsx20f-swskit/documentation/booting-source-packs.mem
+---------------+
| d i g i t a l | I n t e r o f f i c e M e m o r a n d u m
+---------------+
Subj: Creating a bootable source pack
The attached document explains how to take a released RSX-20F
source pack from SDC and make it into a bootable RSX-20F system
pack. The process differs for each operating system and the
document attempts to explain these differences.
This document deals with source packs for TOPS-20 release 4,
TOPS-10 release 7.00, and later.
RSX-20F BOOTABLE SOURCE PACK
1.0 INTRODUCTION
This document explains how to convert a RSX-20F source pack into a
bootable RSX-20F system device. The pack is not shipped as a
bootable system because of the three different front end systems
available on the pack (TOPS-10/1090, TOPS-10/1091,
TOPS-20/2040/2050/2060).
Throughout the rest of this document the following terms will be
used to distinguish between systems:
1. TOPS-10 will refer to a TOPS-10/1090 system and TOPS-20
will refer to a TOPS-10/1091 and TOPS-20/2040/2050/2060
system unless otherwise specified. Although the systems
are different the procedures for a TOPS-10/1091 and a
TOPS-20/2040/2050/2060 are the basically the same because
of the front end system device(floppy).
2. The front end default system device is referred to as dd0:
where dd is DX (floppy) for TOPS-20 and DT (DECtape) for
TOPS-10.
3. The system UIC on the source pack will be referred to as
uic where uic has the following values:
1. 10 - TOPS-10/1090
2. 15 - TOPS-10/1091
3. 20 - TOPS-20/2040/2050/2060
2.0 PREPARATION
The following items are necessary for creating a bootable RSX-20F
system pack from a released RSX-20F source pack:
1. Bootable RSX-20F system pack on dual ported drive
2. Released RSX-20F source pack on dual ported drive
3. One hour stand-alone time on KL system
RSX-20F BOOTABLE SOURCE PACK Page 2
4. Scratch Front End default system device media:
1. One DECtape for TOPS-10
2. Two floppies for TOPS-20
3.0 PROCEDURE
In brief:
1. Boot front end stand-alone
2. Initialize scratch media
3. Mount source pack and scratch media
4. Create system area on scratch media
5. Copy system dependent area to default area on source pack
6. Copy minimum system tasks to scratch media
7. Boot source pack
8. Define source pack as system pack
9. Save system and write bootstrap on source pack
3.1 Boot Front End Stand-alone
The process of making the source pack bootable requires a quiescent
system (ie. no jobs running on the KL10) and a bootable RSX-20F
system to run under until the source pack is ready to boot. When
this process is started timesharing should be halted and the front
end should be re-booted from the switch registers with switch 2
off. Make sure that the 2 packs and scratch media are mounted.
Once this is accomplished the PARSER task should be invoked with
the control-\ command at the CTY. The PARSER will then prompt
with:
PAR>
RSX-20F BOOTABLE SOURCE PACK Page 3
3.2 Initialize Scratch Media
The scratch media should be initialized so as to be certain of what
is on the media. For TOPS-10 the /INDX=MID switch should be used
and for TOPS-20 the process should be repeated except substituting
DX1: for DX0: the second time.
PAR> M INI <cr>
INI> dd0: (/INDX=MID for TOPS-10 DECtape) <cr>
^\ (control \)
PAR>
3.3 Mount Source Pack And Scratch Media
The source pack and scratch media are mounted now so they can be
accessed. The source pack should be mounted write enabled.
PAR> M MOU <cr>
MOU> DBn: <cr> (n is unit number source pack mounted on)
MOU> dd0: <cr>
MOU> ^Z (control Z)
^\ (control \)
PAR>
3.4 Create System Area On Scratch Media
A default system UIC must be created on the scratch media. This
area will be searched when the virgin system is booted.
PAR> M UFD <cr>
UFD> dd0:[5,5] <cr>
^\ (control \)
PAR>
3.5 Copy System Dependent Area To Default Area On Source Pack
The particular system images of the system being created must be
copied from the correct area ([10,5]=TOPS-10/1090,
[15,5]=TOPS-10/1091, [20,5]=TOPS-20/2040/2050/2060) to the default
system area. Only the RSX-20F system image, map and tasks are
copied.
PAR> M PIP <cr>
PIP> DBn:[5,5]/NV=DBn:[uic,5]RSX20F.*;0,*.TSK;0 <cr>
RSX-20F BOOTABLE SOURCE PACK Page 4
3.6 Copy Minimum System Tasks To Scratch Media
Once the new system is booted only the default system area ([5,5])
on the default system device (floppy or DECtape) is known about.
When control-backslash is typed dd0:[5,5] is searched for
PARSER.TSK. If it is not found an error occurs. In order to find
PARSER.TSK the file system (F11ACP.TSK) must be loaded from
dd0:[5,5]. To mount the new system pack the task terminator
(TKTN.TSK) and mount task (MOU.TSK) must also reside on dd0:[5,5].
PIP> dd0:[5,5]=DBn:[5,5]F11ACP.TSK;0,TKTN,PARSER,MOU <cr>
PIP> ^Z (control Z)
^\ (control \)
PAR>
3.7 Boot Source Pack
Now the system image on the source pack must be booted into core.
When the new system is in core, the system device is directed to
the default media; as a result, until the system device is
redirected to the source pack, commands issued after booting will
be relatively slow due to the speed of the default system media.
PAR> M BOO <cr>
BOO> DBn: <cr>
^\ (control \)
PAR>
3.8 Define Source Pack As System Pack
In order to write the bootstrap block onto the source pack the
system device must be redirected to the source pack. Once the
source pack is mounted any known task can be requested because
RSX-20F searches all mounted devices for the task not just SY:.
PAR> M MOU <cr>
MOU> DBn: <cr>
MOU> ^Z (control Z)
^\ (control \)
PAR> M RED <cr>
RED> DBn:=SY: <cr>
^\ (control \)
PAR>
RSX-20F BOOTABLE SOURCE PACK Page 5
3.9 Save System And Write Bootstrap On Source Pack
Now the bootstrap block can be written on the source pack. Before
this is done any default parameters can be set by the PARSER and
then saved. Once the bootstrap is written the system is
automatically rebooted and SY: is redirected to the device the
system was booted from.
PAR> set date and any other permanent default parameters
PAR> M SAV <cr>
SAV> /WB <cr>
^\ (control \)
PAR>
4.0 CONCLUSION
The RSX-20F source pack is now a bootable system pack; the only
things it needs now to make it a complete system are the microcode
(*.MCB) files and the KL boot (*.EXB) files. These files can be
can be gotten from the floppies of the same release as the source
pack. Assuming the correct files are on the device dd0: then they
can be copied to the source pack as follows:
PAR> M PIP <cr>
PIP> DBn:/NV=dd0:[5,5]*.EXB;0 <cr>
PIP> DBn:=dd0:[5,5]*.MCB;0 <cr>
PIP> ^Z (control Z)
^\ (control \)
PAR>
At this point the source pack can now be used as a system pack. | __label__pos | 0.831798 |
Let's say you've just finished conducting traditional (moderated, one-on-one) user tests on a website. Naturally, you have noted whether or not each of your subjects managed to complete each assigned task. Perhaps you've even timed how long it took each subject to complete each task. My question is this: in writing up the results of the tests, how should you describe the performance results?
I recently read a report that was loaded with references such as “67% of users” did such-and-so, and “83% of users” did this-or-that. Personally, I think it is a mistake to name precise numbers like this.
User tests as I've described above are not intended to be quantitative/inferential in nature. Rather, they are qualitative, employed to gain insight into how real users interact with a website. If you start naming precise numbers (particularly as percentage points) you're implying the numbers are statistically relevant. They aren't. Generally, user tests are conducted on between 5 and 10 users, not nearly enough to gain statistically significant results.
By implying significance where none exists, you risk destroying your credibility. If anyone reading your report has ever taken a course in statistics, such numbers will jump out at them. They'll know right away that, given the small sample size, the numbers you've quoted can't possibly be statistically significant. And even though your report may be full of valuable insights, everything you have written will be tarnished because – in the reader's view – your findings are suspect.
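To see concretely why those percentages overreach, look at the margin of error on a proportion estimated from a handful of subjects. The sketch below (Python, purely illustrative; the function name is mine) computes a 95% Wilson score interval for "4 of 6 users succeeded" — the kind of result that gets reported as "67% of users":

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(4, 6)
print(f"{lo:.0%} to {hi:.0%}")  # roughly 30% to 90%
```

With 6 subjects, the plausible range for the "true" success rate spans most of the scale, which is exactly why quoting "67%" implies a precision the data can't support.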
Naturally, in presenting your results, you need to make reference to user test performance. But I think it's much wiser (and safer) to keep such references broad and conversational. For example:
• “In our tests, only our least web-savvy test subject failed to complete this task in a reasonable time. All others breezed through it.”
• “Half of our subjects failed this task.”
• “Four of our test subjects didn't mind the multimedia presentation on the Home page. But two subjects found it very annoying and indicated that in a real scenario, they'd have left the site immediately.”
Note that in some of the examples above, I have in fact named numbers. But by keeping it conversational and not naming percentages, I'm not implying statistical significance. Not only is this more honest, it's also more credible: nobody can dispute my claims of significance, because I haven't made any.
The bottom line is this: in writing up user test reports, focus on the insights gained. Explain where users stumbled, and why they stumbled. Don't risk putting your recommendations into question by implying statistical significance. | __label__pos | 0.790022 |
Difference between Augmented Reality, Virtual reality and Mixed Reality
Augmented reality (AR), Virtual reality (VR), and Mixed Reality (MR) are the trendiest topics in the tech world. They have gained massive attention from the media and major tech companies are heavily investing in them. These three technologies are causing major ripples in all aspects of life. They are used in education, entertainment, communication, medical and many other industries. But, the big question is, are they the same? What are the differences between them?
Augmented Reality vs Virtual reality vs Mixed Reality
Despite having glaring similarities, AR, VR, and MR are three different technologies. In this article, we are going to discuss key differences between Augmented Reality vs Virtual Reality vs Mixed Reality. At the end of it all, you will be able to draw well-defined borderlines between these three technologies.
1. Virtual Reality
From the name, you can easily tell what the VR technology does. It simply immerses you deep into the virtual world. The technology generates a virtual environment which you will interact with. You become part of this environment and you can move around in it. VR is an intuitive technology that has an impact on at least two of the five senses. These are sight and sound. The technology creates a strong perception of being in a completely different place even when in the real sense you are not there.
How is the VR achieved? You first need to wear a headset which is accompanied by an input device which is connected to a computer that is responsible for generating the virtual environment. If you are using a mobile device, the headset and the computer are combined to deliver the virtual world.
Although the headset is the primary hardware for a VR setup, more input devices can be added to immerse a user deeper into the virtual world. For instance, it can be enhanced with motion trackers, haptic devices and treadmills. The VR headset is designed to look like a pair of goggles. It has a set of lenses which are specially aligned to give a 3D effect. The headset is powered by a computer, gaming console or even a mobile device. Special software and sensors are responsible for creating an intuitive virtual environment.
The most unique aspect about VR technology is it delivers high levels of immersion. There is some sense of realism in the way that you interact with the new environment.
Virtual Reality technology is heavily used in the entertainment world. For instance, it is used in video games to enhance the gaming experience. The technology also improves the movie-watching experience whereby it brings the virtual theatre. VR is already used in medical, military and business worlds.
2. Augmented Reality
Unlike VR technology, Augmented Reality does not take you to a virtual world. It only enhances objects in your current world by superimposing virtual images onto it. In simpler terms, AR places virtual objects into an environment that exists in the real world. For instance, the technology makes it possible to see a virtual book on your real table through your phone. The most popular application of AR technology is PokemonGo.
One of the key features that make AR completely different from VR is it combines the real world objects and those that have been generated by the computer. For this reason, it does not provide a completely immersive experience. This is unlike the VR technology that simulates the environment.
Another outstanding feature of AR technology is the type of hardware used. Unlike VR which heavily relies on a set of special external hardware, AR can be achieved without any external devices. Your smartphone is powerful enough to deliver this technology. However, this doesn’t mean that you can’t use an external device. There are special headsets for AR but they are slightly different from the ones for VR. For AR the headset needs to be transparent unlike for VR where the headsets are opaque.
To achieve an AR, you only need a smartphone and an AR app. The camera of your smartphone should have the capacity to capture the real environment around you. On the other hand, the software or app will project and calculate computer-generated objects.
One perfect example of an AR wearable product is Google Glass. It is designed to display a digital overlay right in front of the user. Augmented Reality has numerous applications. In the medical world, it is used to connect surgeons with each other, especially when they are performing the same surgery. Engineers use the technology to create schematics. AR can also be used to create interactive input options which can take the place of an ordinary keyboard.
3. Mixed Reality
Many people tend to confuse MR and VR. This is mainly because both of them are viewed as crossover technologies. In reality, there is a thin borderline between AR and MR. In fact, Mixed Reality tends to combine the best features of virtual reality and those of augmented reality.
Mixed reality combines aspects of the virtual world with those of the real world and enables users to interact with both. Unlike in AR, virtual objects in MR are not just overlaid; you can interact with them fully. This is the main feature that distinguishes MR from AR. On the other hand, the user remains in a real-world environment, which makes the technology different from VR.
Mixed Reality starts with the real world. As time goes by, the digital objects are introduced into this world and a user is immersed in a virtual environment. Sounds like a VR, right? In MR, there is a direct connection between the real world and the virtual environment. This is unlike in VR where the two are not connected.
Most people view MR as an improvement of AR: it adds some sense of immersion to it. There is still ongoing research on MR technology. One of the popular devices is the HoloLens from Microsoft.
Conclusion
In conclusion, Virtual Reality, Augmented Reality and Mixed Reality are new technologies that are changing the way we view things. Despite their differences, all of them are used to achieve their special purposes.
Heap Element
package com.thealgorithms.datastructures.heaps;
/**
* Class for heap elements.<br>
*
* <p>
* A heap element contains two attributes: a key which will be used to build the
* tree (int or double, either primitive type or object) and any kind of
* IMMUTABLE object the user sees fit to carry any information he/she likes. Be
* aware that the use of a mutable object might jeopardize the integrity of this
* information.
*
* @author Nicolas Renard
*/
public class HeapElement {
private final double key;
private final Object additionalInfo;
// Constructors
/**
* @param key : a number of primitive type 'double'
* @param info : any kind of IMMUTABLE object. May be null, since the
* purpose is only to carry additional information of use for the user
*/
public HeapElement(double key, Object info) {
this.key = key;
this.additionalInfo = info;
}
/**
* @param key : a number of primitive type 'int'
* @param info : any kind of IMMUTABLE object. May be null, since the
* purpose is only to carry additional information of use for the user
*/
public HeapElement(int key, Object info) {
this.key = key;
this.additionalInfo = info;
}
/**
* @param key : a number of object type 'Integer'
* @param info : any kind of IMMUTABLE object. May be null, since the
* purpose is only to carry additional information of use for the user
*/
public HeapElement(Integer key, Object info) {
this.key = key;
this.additionalInfo = info;
}
/**
* @param key : a number of object type 'Double'
* @param info : any kind of IMMUTABLE object. May be null, since the
* purpose is only to carry additional information of use for the user
*/
public HeapElement(Double key, Object info) {
this.key = key;
this.additionalInfo = info;
}
/**
* @param key : a number of primitive type 'double'
*/
public HeapElement(double key) {
this.key = key;
this.additionalInfo = null;
}
/**
* @param key : a number of primitive type 'int'
*/
public HeapElement(int key) {
this.key = key;
this.additionalInfo = null;
}
/**
* @param key : a number of object type 'Integer'
*/
public HeapElement(Integer key) {
this.key = key;
this.additionalInfo = null;
}
/**
* @param key : a number of object type 'Double'
*/
public HeapElement(Double key) {
this.key = key;
this.additionalInfo = null;
}
// Getters
/**
* @return the object containing the additional info provided by the user.
*/
public Object getInfo() {
return additionalInfo;
}
/**
* @return the key value of the element
*/
public double getKey() {
return key;
}
// Overridden object methods
@Override
public String toString() {
return "Key: " + key + " - " + (additionalInfo == null ? "No additional info" : additionalInfo.toString());
}
/**
* @param o the object to compare with
* @return true if the keys on both elements are identical and the
* additional info objects are equal (or both null).
*/
@Override
public boolean equals(Object o) {
if (!(o instanceof HeapElement)) {
return false;
}
HeapElement otherHeapElement = (HeapElement) o;
return (this.key == otherHeapElement.key)
&& (this.additionalInfo == null
? otherHeapElement.additionalInfo == null
: this.additionalInfo.equals(otherHeapElement.additionalInfo));
}
@Override
public int hashCode() {
int result = (int) key;
result = 31 * result + (additionalInfo != null ? additionalInfo.hashCode() : 0);
return result;
}
} | __label__pos | 0.999783 |
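For readers landing on this page without the rest of the repository, here is a minimal usage sketch. The class is inlined in trimmed form (null-safe equality via `java.util.Objects`) so the demo compiles standalone; in the repo you would import `com.thealgorithms.datastructures.heaps.HeapElement` instead, and the names below are illustrative only.

```java
public class HeapElementDemo {

    // Trimmed, inlined copy of HeapElement so this file is self-contained.
    static final class HeapElement {
        private final double key;
        private final Object additionalInfo;

        HeapElement(double key, Object info) {
            this.key = key;
            this.additionalInfo = info;
        }

        double getKey() {
            return key;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof HeapElement)) {
                return false;
            }
            HeapElement other = (HeapElement) o;
            return key == other.key
                && java.util.Objects.equals(additionalInfo, other.additionalInfo);
        }

        @Override
        public int hashCode() {
            return java.util.Objects.hash(key, additionalInfo);
        }
    }

    public static void main(String[] args) {
        // The key orders the heap; the info is an immutable payload.
        HeapElement a = new HeapElement(3, "task-A");
        HeapElement b = new HeapElement(3, "task-A");
        System.out.println(a.getKey());   // prints 3.0
        System.out.println(a.equals(b));  // prints true: same key, equal info
    }
}
```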
These two are both very similar. The first is: Let $V$ and $W$ be finite dimensional vector spaces over a field $F$. Let $T:V\to W$ be a linear transformation. Suppose that $T$ is one-to-one. Show that there is a linear transformation $L:W\to V$ such that $LT=1_V$. ----> So far, I have: if $T:V\to W$ then $B_v =\{v_1,\dots,v_n\}$ is a basis for $V$ and $B_w=\{w_1,\dots,w_m\}$ is a basis for $W$, where $v=a_1v_1+\dots+a_nv_n$ and $T(v)=b_1w_1+\dots+b_mw_m$. I'm confused where to go from there.
The second question is: Let $T:V\to W$ and $L:W\to V$ be linear transformations. Show 1.) $T$ is injective if $LT=1_V$ and 2.) $T$ is surjective if $TL=1_W$. For this one, all I know is that you have to show that $L$ is unique and that $L$ is linear.
I'm really bad at linear transformations, so I could really use the guidance. Thanks!
1
This can help: drexel28.wordpress.com/2010/11/30/… (Note that the result is true for set-functions. You just need to check that the left-invese, defined as in the link, is indeed also linear transformation in your case) – user39280 Mar 8 '13 at 20:36
You are looking for a left inverse, usually called a retraction. (Look at this.)
We need to define $L:W \to V$. For any $w$ in the image of $T$ (that is, $T(v) = w$ for some $v \in V$), we can define $L(w) = v$. Why does that make sense? The transformation $T$ is one-to-one, so the $v$ that we're using is unique.
What to do about $L(w)$ for $w \notin \operatorname{Im} T$? Hint: every $w = w_0 + w_1$ for some $w_0 \in \operatorname{Im} T$.
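To make the hint concrete, here is one way to finish the construction (a sketch; it assumes you choose a complement $W'$ of $\operatorname{Im} T$, which exists because $W$ is finite dimensional):

```latex
W = \operatorname{Im} T \oplus W', \qquad
L(w_0 + w_1) := T^{-1}(w_0), \quad w_0 \in \operatorname{Im} T,\ w_1 \in W'.
```

Here $T^{-1}$ makes sense on $\operatorname{Im} T$ precisely because $T$ is one-to-one. Then $L(T(v)) = T^{-1}(T(v)) = v$ for every $v \in V$, so $LT = 1_V$; and $L$ is linear because the decomposition $w = w_0 + w_1$ depends linearly on $w$.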
Yes first part makes sense, but what does ImT mean? Isomorphism T? – kkkk Mar 8 '13 at 20:52
Im just not familiar with the Id_y in the link? – kkkk Mar 8 '13 at 20:56
Im T means "the image of T," which is the set of outputs of the function. – Sammy Black Mar 8 '13 at 22:49
Id_Y just means the identity function on Y. That is the function whose output is the same as input: Id_Y (y) = y for all y in Y. – Sammy Black Mar 8 '13 at 23:07
Im kinda confused on the last part there. However, if in fact T is one-to-one. The the nullity(T)=0 correct? So – kkkk Mar 8 '13 at 23:08
Struct std::ffi::CStr (stable since 1.0.0)
pub struct CStr { /* fields omitted */ }
Representation of a borrowed C string.
This type represents a borrowed reference to a nul-terminated array of bytes. It can be constructed safely from a &[u8] slice, or unsafely from a raw *const c_char. It can then be converted to a Rust &str by performing UTF-8 validation, or into an owned CString.
&CStr is to CString as &str is to String: the former in each pair are borrowed references; the latter are owned strings.
Note that this structure is not repr(C) and is not recommended to be placed in the signatures of FFI functions. Instead, safe wrappers of FFI functions may leverage the unsafe from_ptr constructor to provide a safe interface to other consumers.
Examples
Inspecting a foreign C string:
This example is not tested
use std::ffi::CStr;
use std::os::raw::c_char;
extern { fn my_string() -> *const c_char; }
unsafe {
let slice = CStr::from_ptr(my_string());
println!("string buffer size without nul terminator: {}", slice.to_bytes().len());
}
Passing a Rust-originating C string:
This example is not tested
use std::ffi::{CString, CStr};
use std::os::raw::c_char;
fn work(data: &CStr) {
extern { fn work_with(data: *const c_char); }
unsafe { work_with(data.as_ptr()) }
}
let s = CString::new("data data data data").expect("CString::new failed");
work(&s);
Converting a foreign C string into a Rust String:
This example is not tested
use std::ffi::CStr;
use std::os::raw::c_char;
extern { fn my_string() -> *const c_char; }
fn my_string_safe() -> String {
unsafe {
CStr::from_ptr(my_string()).to_string_lossy().into_owned()
}
}
println!("string: {}", my_string_safe());
Methods
impl CStr
pub unsafe fn from_ptr<'a>(ptr: *const c_char) -> &'a CStr
Wraps a raw C string with a safe C string wrapper.
This function will wrap the provided ptr with a CStr wrapper, which allows inspection and interoperation of non-owned C strings. The total size of the raw C string must be smaller than isize::MAX bytes in memory due to calling the slice::from_raw_parts function. This method is unsafe for a number of reasons:
• There is no guarantee to the validity of ptr.
• The returned lifetime is not guaranteed to be the actual lifetime of ptr.
• There is no guarantee that the memory pointed to by ptr contains a valid nul terminator byte at the end of the string.
• It is not guaranteed that the memory pointed by ptr won't change before the CStr has been destroyed.
Note: This operation is intended to be a 0-cost cast but it is currently implemented with an up-front calculation of the length of the string. This is not guaranteed to always be the case.
Examples
This example is not tested
use std::ffi::CStr;
use std::os::raw::c_char;
extern {
fn my_string() -> *const c_char;
}
unsafe {
let slice = CStr::from_ptr(my_string());
println!("string returned: {}", slice.to_str().unwrap());
}
pub fn from_bytes_with_nul(bytes: &[u8]) -> Result<&CStr, FromBytesWithNulError> (since 1.10.0)
Creates a C string wrapper from a byte slice.
This function will cast the provided bytes to a CStr wrapper after ensuring that the byte slice is nul-terminated and does not contain any interior nul bytes.
Examples
use std::ffi::CStr;
let cstr = CStr::from_bytes_with_nul(b"hello\0");
assert!(cstr.is_ok());
Creating a CStr without a trailing nul terminator is an error:
use std::ffi::CStr;
let cstr = CStr::from_bytes_with_nul(b"hello");
assert!(cstr.is_err());
Creating a CStr with an interior nul byte is an error:
use std::ffi::CStr;
let cstr = CStr::from_bytes_with_nul(b"he\0llo\0");
assert!(cstr.is_err());
pub const unsafe fn from_bytes_with_nul_unchecked(bytes: &[u8]) -> &CStr (since 1.10.0)
Unsafely creates a C string wrapper from a byte slice.
This function will cast the provided bytes to a CStr wrapper without performing any sanity checks. The provided slice must be nul-terminated and not contain any interior nul bytes.
Examples
use std::ffi::{CStr, CString};
unsafe {
let cstring = CString::new("hello").expect("CString::new failed");
let cstr = CStr::from_bytes_with_nul_unchecked(cstring.to_bytes_with_nul());
assert_eq!(cstr, &*cstring);
}
pub const fn as_ptr(&self) -> *const c_char
Returns the inner pointer to this C string.
The returned pointer will be valid for as long as self is, and points to a contiguous region of memory terminated with a 0 byte to represent the end of the string.
WARNING
The returned pointer is read-only; writing to it (including passing it to C code that writes to it) causes undefined behavior.
It is your responsibility to make sure that the underlying memory is not freed too early. For example, the following code will cause undefined behavior when ptr is used inside the unsafe block:
use std::ffi::CString;
let ptr = CString::new("Hello").expect("CString::new failed").as_ptr();
unsafe {
// `ptr` is dangling
*ptr;
}
This happens because the pointer returned by as_ptr does not carry any lifetime information and the CString is deallocated immediately after the CString::new("Hello").expect("CString::new failed").as_ptr() expression is evaluated. To fix the problem, bind the CString to a local variable:
use std::ffi::CString;
let hello = CString::new("Hello").expect("CString::new failed");
let ptr = hello.as_ptr();
unsafe {
// `ptr` is valid because `hello` is in scope
*ptr;
}
This way, the lifetime of the CString in hello encompasses the lifetime of ptr and the unsafe block.
pub fn to_bytes(&self) -> &[u8]
Converts this C string to a byte slice.
The returned slice will not contain the trailing nul terminator that this C string has.
Note: This method is currently implemented as a constant-time cast, but it is planned to alter its definition in the future to perform the length calculation whenever this method is called.
Examples
use std::ffi::CStr;
let cstr = CStr::from_bytes_with_nul(b"foo\0").expect("CStr::from_bytes_with_nul failed");
assert_eq!(cstr.to_bytes(), b"foo");
pub fn to_bytes_with_nul(&self) -> &[u8]
Converts this C string to a byte slice containing the trailing 0 byte.
This function is the equivalent of to_bytes except that it will retain the trailing nul terminator instead of chopping it off.
Note: This method is currently implemented as a 0-cost cast, but it is planned to alter its definition in the future to perform the length calculation whenever this method is called.
Examples
use std::ffi::CStr;
let cstr = CStr::from_bytes_with_nul(b"foo\0").expect("CStr::from_bytes_with_nul failed");
assert_eq!(cstr.to_bytes_with_nul(), b"foo\0");
pub fn to_str(&self) -> Result<&str, Utf8Error> (since 1.4.0)
Yields a &str slice if the CStr contains valid UTF-8.
If the contents of the CStr are valid UTF-8 data, this function will return the corresponding &str slice. Otherwise, it will return an error with details of where UTF-8 validation failed.
Note: This method is currently implemented to check for validity after a constant-time cast, but it is planned to alter its definition in the future to perform the length calculation in addition to the UTF-8 check whenever this method is called.
Examples
use std::ffi::CStr;
let cstr = CStr::from_bytes_with_nul(b"foo\0").expect("CStr::from_bytes_with_nul failed");
assert_eq!(cstr.to_str(), Ok("foo"));
pub fn to_string_lossy(&self) -> Cow<str> (since 1.4.0)
Converts a CStr into a Cow<str>.
If the contents of the CStr are valid UTF-8 data, this function will return a Cow::Borrowed(&str) with the corresponding &str slice. Otherwise, it will replace any invalid UTF-8 sequences with U+FFFD REPLACEMENT CHARACTER and return a Cow::Owned(String) with the result.
Note: This method is currently implemented to check for validity after a constant-time cast, but it is planned to alter its definition in the future to perform the length calculation in addition to the UTF-8 check whenever this method is called.
Examples
Calling to_string_lossy on a CStr containing valid UTF-8:
use std::borrow::Cow;
use std::ffi::CStr;
let cstr = CStr::from_bytes_with_nul(b"Hello World\0")
.expect("CStr::from_bytes_with_nul failed");
assert_eq!(cstr.to_string_lossy(), Cow::Borrowed("Hello World"));
Calling to_string_lossy on a CStr containing invalid UTF-8:
use std::borrow::Cow;
use std::ffi::CStr;
let cstr = CStr::from_bytes_with_nul(b"Hello \xF0\x90\x80World\0")
.expect("CStr::from_bytes_with_nul failed");
assert_eq!(
cstr.to_string_lossy(),
Cow::Owned(String::from("Hello �World")) as Cow<'_, str>
);
pub fn into_c_string(self: Box<CStr>) -> CString (since 1.20.0)
Converts a Box<CStr> into a CString without copying or allocating.
Examples
use std::ffi::CString;
let c_string = CString::new(b"foo".to_vec()).expect("CString::new failed");
let boxed = c_string.into_boxed_c_str();
assert_eq!(boxed.into_c_string(), CString::new("foo").expect("CString::new failed"));
Trait Implementations
impl<'_> From<&'_ CStr> for Box<CStr> (since 1.17.0)
impl<'a> From<&'a CStr> for Cow<'a, CStr> (since 1.28.0)
impl<'_> From<&'_ CStr> for Arc<CStr> (since 1.24.0)
impl<'_> From<&'_ CStr> for Rc<CStr> (since 1.24.0)
impl<'_> From<&'_ CStr> for CString (since 1.7.0)
impl Debug for CStr (since 1.3.0)
impl PartialEq<CStr> for CStr
impl Eq for CStr
impl Ord for CStr
impl PartialOrd<CStr> for CStr
impl Hash for CStr
impl AsRef<CStr> for CStr (since 1.7.0)
impl AsRef<CStr> for CString (since 1.7.0)
impl<'_> Default for &'_ CStr (since 1.10.0)
impl Borrow<CStr> for CString (since 1.3.0)
impl ToOwned for CStr (since 1.3.0)
type Owned = CString
The resulting type after obtaining ownership.
Auto Trait Implementations
impl UnwindSafe for CStr
impl RefUnwindSafe for CStr
impl Unpin for CStr
impl Send for CStr
impl Sync for CStr
Blanket Implementations
impl<T> Borrow<T> for T where T: ?Sized
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> Any for T where T: 'static + ?Sized
impl<T> ToOwned for T where T: Clone
type Owned = T
The resulting type after obtaining ownership.
Enemy.qml Example File
demos/CrazyCarousel/qml/entities/Enemy.qml
import QtQuick 2.0
import Felgo 3.0
EntityBase {
id: enemy
entityType: "enemy"
property alias image: image
property double speed: 10 * gameScene.logic.speedUp
MultiResolutionImage {
id: image
width: parent.width
height: parent.height
}
// provides up/down movement
MovementAnimation {
id: upDownMovement
target: enemy // set animation target to enemy object
property: "y" // we animate the y position
velocity: -enemy.speed // start with up movement
running: true // animation starts automatically
// limits the y property (defines the possible movement area);
// we set a random movement area for each enemy:
minPropertyValue: -(Math.random() * 10 + 5) // min y is random between -5 and -15
maxPropertyValue: Math.random() * 10 + 5 // max y is random between +5 to +15
// change direction after min/max is reached (e.g. move up after down movement is finished)
onLimitReached: {
if(upDownMovement.velocity > 0)
upDownMovement.velocity = -enemy.speed;
else
upDownMovement.velocity = enemy.speed;
}
}
}
Reverse Each Word in the Sentence in JavaScript and PHP
Use Case: Showing competence in your programming interview
A question that you may be asked during a programming interview is "reverse a string" or "reverse a portion of a string". The reason why you might be asked this is because the interviewer wants to see your ability to manipulate strings.
In this post, I'm going to show you how to best answer this question in both JavaScript and PHP.
If you'd rather see this post in video format, watch the video below and be sure to subscribe to my YouTube playlist Tyler Answers Interview Questions.
JavaScript Solution
The best way to solve this dilemma in JavaScript is to create a reusable function that can be used to reverse strings given a specific separator.
In the answer below, we've created a function called rvsBy that takes a string and a separator value. We'll use this function to first reverse our entire sentence by character, and then undo the reversed order of words by calling the function on the reversed sentence.
JavaScript
const sentence = 'This is a sentence!'
const rvsBy = (string, separator) => {
return string.split(separator).reverse().join(separator)
}
const newSentence = rvsBy(sentence, '') // !ecnetnes a si sihT
const answer = rvsBy(newSentence, ' ') // sihT si a !ecnetnes
We've now answered the interview question by keeping the order of the sentence while reversing the individual words.
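If the double-reversal feels roundabout, the same answer also falls out of a single pass that reverses each word in place while leaving word order alone (equivalent logic, just condensed):

```javascript
// Reverse the characters of each word without touching word order.
const reverseWords = (str) =>
  str.split(' ').map(word => [...word].reverse().join('')).join(' ')

console.log(reverseWords('This is a sentence!')) // sihT si a !ecnetnes
```

Either approach is fine in an interview; mentioning both shows you understand that reversing twice with different separators composes into a per-word reversal.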
PHP Solution
In the PHP solution, we'll create the same basic function. There are some differences, however, due to how PHP allows us to manipulate strings and arrays.
Also, we'll need a conditional that uses str_split instead of explode when creating an array of individual characters, because explode doesn't accept an empty separator.
Other than that, you'll see the function is overall the same.
PHP
$sentence = 'This is a sentence!';
function rvsBy($string, $separator) {
if ($separator === '') {
$array = str_split($string);
} else {
$array = explode($separator, $string);
}
$reversed = array_reverse($array);
return join($separator, $reversed);
}
$reversedSentence = rvsBy($sentence, ''); // !ecnetnes a si sihT
$answer = rvsBy($reversedSentence, ' '); // sihT si a !ecnetnes
change resend ack work
Developers
Olen
2009-12-02
2013-09-20
Editing of "TFTPServerProcess.cs" is finished.
private int ImTimeCount = 0; // reset this when data is sent successfully; this declaration is added at line 864
/// <summary>
/// Check the state of the current TFTP session
/// </summary>
public void CheckStates()
{
//System.Diagnostics.StackTrace st = new System.Diagnostics.StackTrace(2, true);
//System.Diagnostics.StackFrame sf = st.GetFrame(0);
//AddMsg(Level.Debug, ident.ToString() + ": Process CheckStates" + sf.ToString());
AddMsg(Level.Debug, ident.ToString() + ": " + DateTime.Now.ToString() + "Process CheckStates timer callback started.");
lock (CurrStatesLock)
{
if (State != null)
{
if (State.TransferState.Closed)
{
AddMsg(Level.Verbose, ident.ToString() + ": " + DateTime.Now.ToString() + " CheckStates timer calling stoplistener because the state is closed.");
StopListener();
}
else
{
if ((DateTime.Now.Ticks - State.Timestamp) > 0 && (((DateTime.Now.Ticks - State.Timestamp) / 10000000) > State.TimeoutSeconds))
{
switch (State.OriginalOpcode)
{
case 1: //RRQ
ImTimeCount++;
break;
case 2://WRQ
ImTimeCount++;
//If we didn't get another DATA packet since our last ACK resend
AddMsg(Level.Info, "Resending ACK to " + State.RemoteIPAddress.ToString() + ":" + State.RemotePortNumber.ToString());
IPEndPoint RemoteEndPoint = new IPEndPoint(IPAddress.Parse(State.RemoteIPAddress), State.RemotePortNumber);
Send(RemoteEndPoint, new byte[] { 0, 4, State.BlockIDByte1, State.BlockIDByte2 });
break;
//case 3: //DATA
// ImTimeCount++;
// break;
//case 4: //ACK
// ImTimeCount++;
// break;
//case 5: //ERROR
// ImTimeCount++;
// break;
//case 6: //OACK
// ImTimeCount++;
// break;
default:
ImTimeCount++;
//If a state has had no activity for the specified interval remove it
AddMsg(Level.Info, "Timeout for " + State.TransferType + " request of file " + State.Filename + " connection " + State.RemoteIPAddress.ToString() + ":" + State.RemotePortNumber.ToString());
AddMsg(Level.Verbose, "Timeout was " + ((DateTime.Now.Ticks - State.Timestamp) / 10000000).ToString() + " seconds");
State.ErrorOccurred = true;
State.ErrorMsg = "Timeout occured for " + State.TransferType + " request of file " + State.Filename + " from " + State.RemoteIPAddress.ToString();
State.Close();
break;
}
if (ImTimeCount > Timeout)
{
ImTimeCount++;
//If a state has had no activity for the specified interval remove it
AddMsg(Level.Info, "Timeout for " + State.TransferType + " request of file " + State.Filename + " connection " + State.RemoteIPAddress.ToString() + ":" + State.RemotePortNumber.ToString());
AddMsg(Level.Verbose, "Timeout was " + ((DateTime.Now.Ticks - State.Timestamp) / 10000000).ToString() + " seconds");
State.ErrorOccurred = true;
State.ErrorMsg = "Timeout occurred for " + State.TransferType + " request of file " + State.Filename + " from " + State.RemoteIPAddress.ToString();
State.Close();
}
/*
if (((DateTime.Now.Ticks - State.Timestamp) / 10000000) > State.TimeoutSeconds)
{
//If a state has had no activity for the specified interval remove it
AddMsg(Level.Info, "Timeout for " + State.TransferType + " request of file " + State.Filename + " connection " + State.RemoteIPAddress.ToString() + ":" + State.RemotePortNumber.ToString());
AddMsg(Level.Verbose, "Timeout was " + ((DateTime.Now.Ticks - State.Timestamp) / 10000000).ToString() + " seconds");
State.ErrorOccurred = true;
State.ErrorMsg = "Timeout occured for " + State.TransferType + " request of file " + State.Filename + " from " + State.RemoteIPAddress.ToString();
State.Close();
}
else if ((((DateTime.Now.Ticks - State.Timestamp) / 10000000) > State.TimeoutSeconds) && (State.OriginalOpcode == 2))
{
//If we didn't get another DATA packet since our last ACK resend
AddMsg(Level.Info, "Resending ACK to " + State.RemoteIPAddress.ToString() + ":" + State.RemotePortNumber.ToString());
IPEndPoint RemoteEndPoint = new IPEndPoint(IPAddress.Parse(State.RemoteIPAddress), State.RemotePortNumber);
Send(RemoteEndPoint, new byte { 0, 4, State.BlockIDByte1, State.BlockIDByte2 });
}
*/
}
}
}
else if (TimeoutCounter > Timeout)
{
AddMsg(Level.Verbose, ident.ToString() + ": Stopping listener because no active sessions for timeout period.");
this.StopListener();
}
else
{
//The listener should not be running if there is not an active session
++TimeoutCounter;
}
}
AddMsg(Level.Debug, ident.ToString() + ": " + "Process CheckStates timer callback finished " + DateTime.Now.ToString());
}
and edit … private void SendNextDataDatagram(…)
private void SendNextDataDatagram(byte ReceivedBytes, IPEndPoint RemoteEndPoint, string EndPointString)
{
try
{
ImTimeCount = 0; //if send data successfully
byte ………
}…………..
Tom Kuhn
2013-09-20
Are any of these changes in the trunk or branches? Are they in the latest binaries?
Custom Snapshots
Custom snapshots were totally reworked with the extensible snapshots overhaul in Stack 1.6.0 (see the writeup and PR #3249). This documentation covers the new syntax only.
Custom snapshots allow you to create your own snapshots, which provide a list of packages to use, along with flags, ghc-options, and a few other settings. Custom snapshots may extend any other snapshot that can be specified in a resolver field. The packages specified follow the syntax of extra-deps in the stack.yaml file, with one exception: to ensure reproducibility of snapshots, local directories are not allowed for custom snapshots (as they are expected to change regularly).
resolver: lts-8.21 # Inherits GHC version and package set
compiler: ghc-8.0.1 # Overwrites GHC version in the resolver, optional
name: my-snapshot # User-friendly name
# Additional packages, follows extra-deps syntax
packages:
- unordered-containers-0.2.7.1
- hashable-1.2.4.0
- text-1.2.2.1
# Override flags, can also override flags in the parent snapshot
flags:
unordered-containers:
debug: true
# Packages from the parent snapshot to ignore
drop-packages:
- wai-extra
# Packages which should be hidden (affects the script command's import
# parser)
hidden:
wai: true
warp: false
# Set GHC options for specific packages
ghc-options:
warp:
- -O2
If you put this in a snapshot.yaml file in the same directory as your project, you can now use the custom snapshot like this:
resolver: snapshot.yaml
This is an example of a custom snapshot stored in the filesystem. They are assumed to be mutable, so you are free to modify it. We detect that the snapshot has changed by hashing the contents of the involved files, and using it to identify the snapshot internally. It is often reasonably efficient to modify a custom snapshot, due to stack sharing snapshot packages whenever possible.
Using a URL instead of a filepath
For efficiency, URLs are treated differently. If I uploaded the snapshot to https://domain.org/snapshot-1.yaml, it is expected to be immutable. If you change that file, then you lose any reproducibility guarantees.
Overriding the compiler
The following snapshot specification will be identical to lts-7.1, but instead use ghc-7.10.3 instead of ghc-8.0.1:
resolver: lts-7.1
compiler: ghc-7.10.3
Dropping packages
The following snapshot specification will be identical to lts-7.1, but without the text package in our snapshot. Removing this package will cause all the packages that depend on text to be unbuildable, but they will still be present in the snapshot.
resolver: lts-7.1
drop-packages:
- text
Specifying ghc-options
In order to specify ghc-options for a package, you use the same syntax as the ghc-options field for build configuration. The following snapshot specification will be identical to lts-7.1, but provides -O1 as a ghc-option for text:
resolver: lts-7.1
packages:
- text-1.2.2.1
ghc-options:
text: -O1
This works somewhat differently than the stack.yaml ghc-options field, in that options can only be specified for packages that are mentioned in the custom snapshot's packages list. It sets the ghc-options, rather than extending those specified in the snapshot being extended.
Another difference is that the * entry for ghc-options applies to all packages in the packages list, rather than all packages in the snapshot.
Specifying flags
In order to specify flags for a package, you use the same syntax as the flags field for build configuration. The following snapshot specification will be identical to lts-7.1, but it enables the developer cabal flag:
resolver: lts-7.1
packages:
- text-1.2.2.1
flags:
text:
developer: true | __label__pos | 0.939346 |
create a C++ program:
Design a class named Account that contains:
¦An int data field named id for the account.
¦A double data field named balance for the account
¦A double data field named annualInterestRate that stores the current interest rate.
¦A no-arg constructor that creates a default account with id 0, balance 0, and annualInterestRate 0.
¦The accessor and mutator functions for id, balance, and annualInterestRate.
¦A function named getMonthlyInterestRate() that returns the monthly interest rate.
¦A function named withdraw that withdraws a specified amount from the account.
¦A function named deposit that deposits a specified amount to the account.
Write a test program that creates an Account object with an account ID of 1122, a balance of 20000, and an annual interest rate of 4.5%. Use the withdraw function to withdraw $2500, use the deposit function to deposit $3000, and print the balance, the monthly interest, and the date when this account was created.
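The requested design can be sketched quickly. The following is an illustrative Python version of the same class (the assignment itself asks for C++, and the `get_monthly_interest` helper is an extra convenience not named in the spec):

```python
class Account:
    """Bank account per the spec above: id, balance, annual interest rate."""

    def __init__(self, id=0, balance=0.0, annual_interest_rate=0.0):
        self.id = id
        self.balance = balance
        self.annual_interest_rate = annual_interest_rate  # 4.5 means 4.5%

    def get_monthly_interest_rate(self):
        return self.annual_interest_rate / 12

    def get_monthly_interest(self):  # convenience helper, not in the spec
        return self.balance * self.get_monthly_interest_rate() / 100

    def withdraw(self, amount):
        self.balance -= amount

    def deposit(self, amount):
        self.balance += amount


acct = Account(1122, 20000.0, 4.5)
acct.withdraw(2500)
acct.deposit(3000)
print(acct.balance)                 # 20500.0
print(acct.get_monthly_interest())  # 76.875
```

Translating this to C++ is mostly a matter of adding types and the accessor/mutator functions the assignment lists.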
Twilight Render Tutorials - Version 2
Basic Tutorials
Here we explore the basic features of Twilight Render V2. New users should definitely start here to familiarize themselves with how Twilight Render takes your SketchUp geometry and turns it into a beautifully rendered image.
1. Preflight Checklist, Before You Render
2. Basic Materials
3. Basic Lighting
4. Basic Environment
5. Basic Rendering
6. Exploration Rendering
7. Image Post-Processing
8. Animation
9. Importing Material Libraries
Exploration Thumb2
Intermediate Tutorials
Here we explore Twilight Render V2 features a little deeper. After going through the basics, you will quickly ramp up your workflow with these tips for getting the quickest results. Many of these tutorials apply to the V2 Pro version only.
1. Intermediate Materials 1
2. Intermediate Materials 2
3. Batch Rendering
4. Advanced Tools
5. Section Plane Rendering
6. Composite Overlay of Rendering with SketchUp view style
7. Importers and External Proxies
Looking for V1 Tutorials?
Find them here!
AddOn and Specialized Tutorials
In this set we provide tutorials for Twilight Render's AddOn tools and any additional specialized tools or features.
1. Render-to-Texture
2. Terrain Tools
3. FastForward Denoising
Don't forget to download our help guides! Everything you need to know about Twilight Render can be found in one of our online videos or downloadable PDF guides.
Thursday, March 04, 2021
Difference between @staticmethod and @classmethod in Python?
Total Post:149
Points:1043
Python
1387 View(s)
Ratings:
Rate this:
What is the difference between a function decorated with @staticmethod and one decorated with @classmethod?
1. Post:397
Points:3117
Re: Difference between @staticmethod and @classmethod in Python?
Maybe a bit of example code will help: Notice the difference in the call signatures of foo, class_foo and static_foo:
class A(object):
def foo(self,x):
print "executing foo(%s,%s)"%(self,x)
@classmethod
def class_foo(cls,x):
print "executing class_foo(%s,%s)"%(cls,x)
@staticmethod
def static_foo(x):
print "executing static_foo(%s)"%x
a=A()
Below is the usual way an object instance calls a method. The object instance, a, is implicitly passed as the first argument.
a.foo(1)
# executing foo(<__main__.A object at 0xb7dbef0c>,1)
With classmethods, the class of the object instance is implicitly passed as the first argument instead of self.
a.class_foo(1)
# executing class_foo(<class '__main__.A'>,1)
You can also call class_foo using the class. In fact, if you define something to be a classmethod, it is probably because you intend to call it from the class rather than from a class instance. A.foo(1) would have raised a TypeError, but A.class_foo(1) works just fine:
A.class_foo(1)
# executing class_foo(<class '__main__.A'>,1)
One use people have found for class methods is to create inheritable alternative constructors.
With staticmethods, neither self (the object instance) nor cls (the class) is implicitly passed as the first argument. They behave like plain functions except that you can call them from an instance or the class:
a.static_foo(1)
# executing static_foo(1)
A.static_foo('hi')
# executing static_foo(hi)
Staticmethods are used to group functions which have some logical connection with a class to the class.
foo is just a function, but when you call a.foo you don't just get the function, you get a "partially applied" version of the function with the object instance a bound as the first argument to the function. foo expects 2 arguments, while a.foo only expects 1 argument.
a is bound to foo. That is what is meant by the term "bound" below:
print(a.foo)
# <bound method A.foo of <__main__.A object at 0xb7d52f0c>>
With a.class_foo, a is not bound to class_foo, rather the class A is bound to class_foo.
print(a.class_foo)
# <bound method type.class_foo of <class '__main__.A'>>
Here, with a staticmethod, even though it is a method, a.static_foo just returns a good 'ole function with no arguments bound. static_foo expects 1 argument, and a.static_foo expects 1 argument too.
print(a.static_foo)
# <function static_foo at 0xb7d479cc>
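The "inheritable alternative constructor" use of classmethod mentioned above can be sketched like this (class and method names here are illustrative, written in Python 3 syntax):

```python
class Date:
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

    @classmethod
    def from_string(cls, text):
        # cls is whatever class the method was called on, so a subclass
        # calling this constructor gets an instance of the subclass
        year, month, day = map(int, text.split("-"))
        return cls(year, month, day)


class EuroDate(Date):
    def show(self):
        return "%02d/%02d/%d" % (self.day, self.month, self.year)


d = Date.from_string("2013-09-20")
e = EuroDate.from_string("2013-09-20")
print(type(e).__name__)  # EuroDate
print(e.show())          # 20/09/2013
```

Because the classmethod receives `cls` rather than a hard-coded class name, `EuroDate.from_string` returns a `EuroDate` without the subclass having to override the constructor.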
Python implementation of column-pivot Gaussian elimination and LU decomposition
Numerical analysis: solving linear systems in Python with Gaussian elimination with column pivoting and with LU decomposition
I. Linear systems in matrix form
(figure omitted)
II. Gaussian elimination
(figures omitted)
III. Gaussian elimination with column pivoting
(figure omitted)
IV. Triangular decomposition (LU decomposition)
Only the Doolittle decomposition is briefly introduced here.
(figures omitted)
Problem: write column-pivot Gaussian elimination and LU decomposition routines to solve the linear system Ax=b.
(figure omitted)
Column-pivot elimination implementation:
import math
import numpy as np
# Goal: practice direct methods for linear systems — column-pivot elimination and triangular (LU) decomposition
# Gaussian elimination with column pivoting
def CME(a, b, x):
    isdet0 = 0
    m, n = a.shape  # number of rows and columns of a
    for k in range(n - 1):  # outer loop over pivot rows (0, n-1)
        # before each elimination step, find the largest pivot in column k and swap rows
        ans = np.fabs(a[k][k])
        ik = k
        for i in range(k + 1, n):
            if ans < np.fabs(a[i][k]):  # fabs is absolute value; find the entry of largest magnitude in column k
                ik = i
                ans = np.fabs(a[i][k])
        if np.fabs(ans) < 1e-10:
            isdet0 = 1
            break
        if ik != k:
            for i in range(k, m):
                temp = a[k][i]
                a[k][i] = a[ik][i]
                a[ik][i] = temp
            temp = b[k]
            b[k] = b[ik]
            b[ik] = temp
        for i in range(k + 1, n):  # inner loop over rows (k+1, n); compute the elimination multiplier for this row
            temp = a[i][k] / a[k][k]
            for j in range(k, m):  # update every column of the row
                a[i][j] = a[i][j] - temp * a[k][j]
            b[i] = b[i] - temp * b[k]
    # back substitution to recover the solution
    if np.fabs(a[n - 1][n - 1]) < 1e-10:
        isdet0 = 1
    if isdet0 == 0:
        # x = np.zeros(n)
        x[n - 1] = b[n - 1] / a[n - 1][n - 1]  # solve for the last unknown first
        for i in range(n - 2, -1, -1):  # then back-substitute for each earlier unknown
            temp = 0
            for j in range(n - 1, i, -1):
                temp = temp + a[i][j] * x[j]
            x[i] = (b[i] - temp) / a[i][i]
    for i in range(n):
        print("x" + str(i + 1) + " = ", x[i])
    print("x = ", x)
if __name__ == '__main__':  # this block runs only when the module is executed directly, not when it is imported
    a = np.array([[3.01, 6.03, 1.99], [1.27, 4.16, -1.23], [0.987, -4.81, 9.34]])
    b = np.array([1.0, 1.0, 1.0])
    m, n = a.shape
    x = np.zeros(n)
    B = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            B[i][j] = a[i][j]
    CME(a, b, x)
    # verification: multiply the saved copy B by the computed solution
    for i in range(0, n):
        temp = 0
        for j in range(0, n):
            temp = temp + B[i][j] * x[j]
        print("%f" % temp)
LU decomposition implementation:
import math
import numpy as np
# Goal: practice direct methods for linear systems — column-pivot elimination and triangular (LU) decomposition
# Gaussian elimination with column pivoting
def CME(a, b, x):
    isdet0 = 0
    m, n = a.shape  # number of rows and columns of a
    for k in range(n - 1):  # outer loop over pivot rows (0, n-1)
        # before each elimination step, find the largest pivot in column k and swap rows
        ans = np.fabs(a[k][k])
        ik = k
        for i in range(k + 1, n):
            if ans < np.fabs(a[i][k]):  # fabs is absolute value; find the entry of largest magnitude in column k
                ik = i
                ans = np.fabs(a[i][k])
        if np.fabs(ans) < 1e-10:
            isdet0 = 1
            break
        if ik != k:
            for i in range(k, m):
                temp = a[k][i]
                a[k][i] = a[ik][i]
                a[ik][i] = temp
            temp = b[k]
            b[k] = b[ik]
            b[ik] = temp
        for i in range(k + 1, n):  # inner loop over rows (k+1, n); compute the elimination multiplier for this row
            temp = a[i][k] / a[k][k]
            for j in range(k, m):  # update every column of the row
                a[i][j] = a[i][j] - temp * a[k][j]
            b[i] = b[i] - temp * b[k]
    # back substitution to recover the solution
    if np.fabs(a[n - 1][n - 1]) < 1e-10:
        isdet0 = 1
    if isdet0 == 0:
        # x = np.zeros(n)
        x[n - 1] = b[n - 1] / a[n - 1][n - 1]  # solve for the last unknown first
        for i in range(n - 2, -1, -1):  # then back-substitute for each earlier unknown
            temp = 0
            for j in range(n - 1, i, -1):
                temp = temp + a[i][j] * x[j]
            x[i] = (b[i] - temp) / a[i][i]
    for i in range(n):
        print("x" + str(i + 1) + " = ", x[i])
    print("x = ", x)
# Triangular (Doolittle LU) decomposition method
def LU(a, b, x):
    m, n = a.shape  # number of rows and columns of a
    y = np.array([0.0, 0.0, 0.0])
    for j in range(1, n):  # column 0 of L
        a[j][0] = a[j][0] / a[0][0]
    for i in range(1, n - 1):  # compute row i of U, then column i of L
        for j in range(i, n):  # element j of row i of U
            sum = 0.0
            for s in range(0, i):
                sum = sum + a[i][s] * a[s][j]
            a[i][j] = a[i][j] - sum
        # element j of column i of L (stored at row j, column i)
        for j in range(i + 1, n):
            sum = 0.0
            for s in range(0, i):
                sum = sum + a[j][s] * a[s][i]
            a[j][i] = (a[j][i] - sum) / a[i][i]
    # compute U[n-1][n-1]
    sum = 0.0
    for s in range(0, n - 1):
        sum = sum + a[n - 1][s] * a[s][n - 1]
    a[n - 1][n - 1] = a[n - 1][n - 1] - sum
    # forward substitution: solve Ly = b
    y[0] = b[0]
    for i in range(1, n):
        sum = 0.0
        for j in range(0, i):
            sum = sum + a[i][j] * y[j]
        y[i] = b[i] - sum
    # back substitution: solve Ux = y
    x[n - 1] = y[n - 1] / a[n - 1][n - 1]
    for i in range(n - 2, -1, -1):  # solve for x[i]
        sum = 0.0
        for j in range(n - 1, i, -1):
            sum = sum + a[i][j] * x[j]
        x[i] = (y[i] - sum) / a[i][i]
    for i in range(n):
        print("x" + str(i + 1) + " = ", x[i])
    print("x = ", x)
if __name__ == '__main__':  # this block runs only when the module is executed directly, not when it is imported
    a = np.array([[3.01, 6.03, 1.99], [1.27, 4.16, -1.23], [0.987, -4.81, 9.34]])
    b = np.array([1.0, 1.0, 1.0])
    m, n = a.shape
    x = np.zeros(n)
    B = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            B[i][j] = a[i][j]
    # CME(a, b, x)
    LU(a, b, x)
    # verification: multiply the saved copy B by the computed solution
    for i in range(0, n):
        temp = 0
        for j in range(0, n):
            temp = temp + B[i][j] * x[j]
        print("%f" % temp)
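As a cross-check on the hand-rolled routines above, NumPy's built-in solver (which itself uses a LAPACK LU factorization with partial pivoting) can solve the same test system:

```python
import numpy as np

A = np.array([[3.01, 6.03, 1.99],
              [1.27, 4.16, -1.23],
              [0.987, -4.81, 9.34]])
b = np.array([1.0, 1.0, 1.0])

x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))  # True
```

The components of `x` should agree with the output of `CME` and `LU` up to rounding.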
Gaussian Elimination and LU Decomposition
1000+ views, 2019-04-22 18:27:24
1. Gaussian elimination
Take a general system Ax=b.
Fig. 1 (the original system)
The usual approach eliminates down to the last equation, solves for x_4, then back-substitutes into the earlier equations to solve for x_3, x_2, x_1.
Fig. 2 (the triangularized system)
But once the data gets large this becomes very tedious and hard to solve.
The essence of Gaussian elimination is decomposing the matrix A into L*U, where L is a lower triangular matrix and U is an upper triangular matrix.
Let A be the initial coefficient matrix of the unknowns. The elimination process can be represented by left-multiplying with matrices L_i: L_n * L_{n-1} * ... * L_1 * A = A_n.
At the end of elimination we have A_n * X = b_n,
and the final A_n is exactly the matrix of Fig. 2.
Let L = (L_n * L_{n-1} * ... * L_1)^{-1}. Then L^{-1} * A * X = L^{-1} * b.
Since the final matrix A_n is upper triangular, write A_n as U; this gives U * X = L^{-1} * b, so we only need to find the matrices L and U.
2. LU decomposition
Transform the coefficient matrix A into the product of two equivalent matrices L and U, where L is a unit lower triangular matrix and U is an upper triangular matrix. When all leading principal minors of A are nonzero, A can be decomposed as A = LU. (Even if not all leading principal minors are nonzero, the matrix may still admit an LU decomposition.) Here L is lower triangular and U is upper triangular.
Doolittle direct decomposition
Example:
(figure omitted: the computed L and U matrices)
MATLAB code for the LU decomposition:
% Input: system Ax=b (augmented matrix x=[A,b])
A=[1 2 3;2 2 8;-3 -10 -2];b=[0;-4;-11];
%x=[A,b];
n=length(A); % number of rows of A
% LU decomposition
U=zeros(n);L=eye(n,n);
% n iterations: each pass first fills all of row i of U, then all of column i of L
for i=1:n
    % compute row i of U
    row=i;
    for col=row:n
        sum=0;
        % entry of U at row `row`, column `col`
        for k=1:row-1
            sum=sum+L(row,k)*U(k,col);
        end
        U(row,col)=A(row,col)-sum;
    end
    % compute column i of L
    col=i;
    % entry at row `row`, column `col`
    for row=col+1:n
        sum=0;
        for k=1:col-1
            sum=sum+L(row,k)*U(k,col);
        end
        L(row,col)=(A(row,col)-sum)/U(col,col);
    end
end
L
U
3. Back substitution in Gaussian elimination
With the L and U matrices computed,
set UX = y;
then solve Ly = b, followed by UX = y, to obtain X.
• Solving Ly = b: for i = 1, y_1 = b_1; for i >= 2, y_i = b_i - \sum_{j=1}^{i-1} L_{i,j} y_j.
• Solving Ux = y is analogous and is not repeated here.
MATLAB code for solving Ly = b and Ux = y:
for t=2:n            % solve Ly=b (forward substitution)
    b(t)=b(t)-L(t,1:t-1)*b(1:t-1);
end
b(n)=b(n)/U(n,n);    % solve Ux=y (back substitution)
for t=1:n-1;
    k=n-t;b(k)=(b(k)-U(k,k+1:n)*b(k+1:n))/U(k,k);
end
x=b;                 % x now holds the solution of Ax=b
The complete code:
clc;clear all;
% Input: system Ax=b (augmented matrix x=[A,b])
A=[1 2 3;2 2 8;-3 -10 -2];b=[0;-4;-11];
%x=[A,b];
n=length(A); % number of rows of A
% LU decomposition
U=zeros(n);L=eye(n,n);
% n iterations: each pass first fills all of row i of U, then all of column i of L
for i=1:n
    % compute row i of U
    row=i;
    for col=row:n
        sum=0;
        % entry of U at row `row`, column `col`
        for k=1:row-1
            sum=sum+L(row,k)*U(k,col);
        end
        U(row,col)=A(row,col)-sum;
    end
    % compute column i of L
    col=i;
    % entry at row `row`, column `col`
    for row=col+1:n
        sum=0;
        for k=1:col-1
            sum=sum+L(row,k)*U(k,col);
        end
        L(row,col)=(A(row,col)-sum)/U(col,col);
    end
end
L
U
% LU decomposition finished
% solve U*x=(L^-1)*b
y=zeros(n,1);
y(1)=b(1);
for i=2:n            % solve Ly=b
    y(i)=b(i)-L(i,1:i-1)*y(1:i-1);
end
x=zeros(n,1);
x(n)=y(n)/U(n,n);
%b(n)=b(n)/U(n,n);   % solve Ux=y
for i=1:n-1;
    k=n-i;
    x(k)=(y(k)-U(k,k+1:n)*x(k+1:n))/U(k,k);
end
x                    % x is the solution of Ax=b
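The example system used in the MATLAB code above solves to x = (1, 1, -1). A quick NumPy check of the same numbers:

```python
import numpy as np

A = np.array([[1, 2, 3], [2, 2, 8], [-3, -10, -2]], dtype=float)
b = np.array([0, -4, -11], dtype=float)

x = np.linalg.solve(A, b)
print(np.allclose(x, [1.0, 1.0, -1.0]))  # True
```

This gives an independent confirmation of the hand-derived L and U factors.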
SSE Parallelization Experiment for Gaussian Elimination
1000+ views, 2018-04-20 21:09:57
Gaussian elimination: principle and pseudocode
Gaussian elimination (LU decomposition) is an algorithm in linear algebra that can be used to solve systems of linear equations, to compute the rank of a matrix, and to compute the inverse of an invertible square matrix. Its principle: if elementary row operations transform the augmented matrix into [C D], then AX = B and CX = D have the same solutions. So we can use elementary row operations to bring the augmented matrix to row-echelon form and then back-substitute to obtain the solution.
The overall workflow is:
original linear system --> Gaussian elimination --> linear system in lower- or upper-triangular form --> solve by forward substitution (for the upper-triangular form, use back substitution)
The serial Gaussian elimination (LU decomposition) algorithm is given by the following pseudocode:
for k := 1 to n do
    for j := k to n do
        A[k, j] := A[k, j]/A[k, k];
    for i := k + 1 to n do
        for j := k + 1 to n do
            A[i, j] := A[i, j] - A[i, k] × A[k, j];
        A[i, k] := 0;
Here the first inner for loop divides every element of row k by the pivot so that the row's first nonzero element becomes 1,
and the second inner for loop (which itself contains another small for loop) subtracts from each row below k the k-th row times that row's leading entry, making column k zero from row k+1 downward.
SSE/AVX overview:
The C interfaces of the SSE/AVX instructions supported by Intel ICC and the open-source GCC compiler are declared in the xmmintrin.h and pmmintrin.h headers. The main data types are __m128/__m256 and __m128d/__m256i; the default is single precision (d denotes double precision, i denotes integer). The intrinsic names can be divided roughly into three parts separated by "_", with the following meanings.
The first part is _mm or _mm256. _mm marks an SSE instruction operating on a 64- or 128-bit vector; _mm256 marks an AVX instruction operating on a 256-bit vector. This section covers only 128-bit SSE and 256-bit AVX instructions.
The second part is the operation name, such as _add, _load, or _mul; some operations carry a modifier, e.g. loadu denotes a memory access that need not be aligned to the vector length.
The third part names the operand type: _ps operates on all single-precision values in the vector; _pd on all double-precision values; _pixx on all xx-bit signed integers in a 64-bit vector register; _epixx on all xx-bit signed integers in a 128-bit register; _epuxx on all xx-bit unsigned integers in a 128-bit register; _ss operates only on the first single-precision value; si128 operates on the first 128-bit signed integer in the register.
Combined, the three parts form a vector intrinsic; for example, _mm256_add_ps performs single-precision floating-point addition using a 256-bit vector register. Because this is instruction-level data parallelism, the granularity is very fine and calls for fine-grained parallel algorithm design. The SSE/AVX instruction sets handle branches very poorly, and extracting particular elements from a vector is expensive, so they are unsuitable for computations with complex logic.
A few SSE intrinsics used in the code below:
_mm_loadu_ps performs a packed load (all of the intrinsics below operate on packed data); the address is not required to be 16-byte aligned. It corresponds to the movups instruction.
_mm_sub_ps(__m128 _A, __m128 _B) returns an __m128 register holding the element-wise difference: for _A=(_A0,_A1,_A2,_A3) and _B=(_B0,_B1,_B2,_B3) the result is r=(_A0-_B0, _A1-_B1, _A2-_B2, _A3-_B3).
_mm_storeu_ps(float *_V, __m128 _A) stores the four single-precision values of _A to the address _V, which is not required to be aligned.
SSE algorithm design and implementation:
Analyzing the Gaussian elimination program shows that two parts can be parallelized: the division in the first part and the subtraction in the second. That is:
1. In the first inner for loop, A[k, j] := A[k, j]/A[k, k] can be performed as a parallel division.
2. In the doubly nested for loop, A[i, j] := A[i, j] - A[i, k] × A[k, j] can be performed as a parallel subtraction.
Now the core code.
1. First, Gaussian elimination without parallelization:
float** normal_gaosi(float **matrix) // serial Gaussian elimination without SSE
{
    for (int k = 0; k < N; k++)
    {
        float tmp = matrix[k][k];
        for (int j = k; j < N; j++)
        {
            matrix[k][j] = matrix[k][j] / tmp;
        }
        for (int i = k + 1; i < N; i++)
        {
            float tmp2 = matrix[i][k];
            for (int j = k + 1; j < N; j++)
            {
                matrix[i][j] = matrix[i][j] - tmp2 * matrix[k][j];
            }
            matrix[i][k] = 0;
        }
    }
    return matrix;
}
2. Now the SSE-parallelized Gaussian elimination:
void SSE_gaosi(float **matrix) // Gaussian elimination with SSE parallelism
{
    __m128 t1, t2, t3, t4;
    for (int k = 0; k < N; k++)
    {
        float tmp[4] = { matrix[k][k], matrix[k][k], matrix[k][k], matrix[k][k] };
        t1 = _mm_loadu_ps(tmp);
        for (int j = N - 4; j >= k; j -= 4) // take four elements at a time, back to front
        {
            t2 = _mm_loadu_ps(matrix[k] + j);
            t3 = _mm_div_ps(t2, t1); // division
            _mm_storeu_ps(matrix[k] + j, t3);
        }
        if (k % 4 != (N % 4)) // handle the leftover elements when the count is not divisible by 4
        {
            for (int j = k; j % 4 != (N % 4); j++)
            {
                matrix[k][j] = matrix[k][j] / tmp[0];
            }
        }
        for (int j = (N % 4) - 1; j >= 0; j--)
        {
            matrix[k][j] = matrix[k][j] / tmp[0];
        }
        for (int i = k + 1; i < N; i++)
        {
            float tmp[4] = { matrix[i][k], matrix[i][k], matrix[i][k], matrix[i][k] };
            t1 = _mm_loadu_ps(tmp);
            for (int j = N - 4; j > k; j -= 4)
            {
                t2 = _mm_loadu_ps(matrix[i] + j);
                t3 = _mm_loadu_ps(matrix[k] + j);
                t4 = _mm_sub_ps(t2, _mm_mul_ps(t1, t3)); // subtraction
                _mm_storeu_ps(matrix[i] + j, t4);
            }
            for (int j = k + 1; j % 4 != (N % 4); j++)
            {
                matrix[i][j] = matrix[i][j] - matrix[i][k] * matrix[k][j];
            }
            matrix[i][k] = 0;
        }
    }
}
Analysis of the experimental results:
To measure performance we run Gaussian elimination on matrices of size 8, 64, 512, 1024, 2048 and 4096, and compare the time spent by the serial version with that of the parallel version. For the larger matrices we only look at the timing comparison.
1. N=8: the amount of data is small, so the time difference is negligible.
2. N=64: likewise, the difference is negligible.
3. N=512: from here on we see a change in the timings; as the data keeps growing, the advantage of the parallel version gradually shows.
4. N=1024
5. N=2048
6. N=4096
Overall the advantage is not particularly large. I think the reason is that there are fixed steps that take time before the final parallel work, such as alignment handling, so the SSE version pays that cost and is not as fast as one might imagine.
The complete code:
#include<pmmintrin.h>
#include<time.h>
#include<xmmintrin.h>
#include<iostream>
#define N 4096
using namespace std;
float** normal_gaosi(float **matrix) // serial Gaussian elimination without SSE
{
    for (int k = 0; k < N; k++)
    {
        float tmp = matrix[k][k];
        for (int j = k; j < N; j++)
        {
            matrix[k][j] = matrix[k][j] / tmp;
        }
        for (int i = k + 1; i < N; i++)
        {
            float tmp2 = matrix[i][k];
            for (int j = k + 1; j < N; j++)
            {
                matrix[i][j] = matrix[i][j] - tmp2 * matrix[k][j];
            }
            matrix[i][k] = 0;
        }
    }
    return matrix;
}
void SSE_gaosi(float **matrix) // Gaussian elimination with SSE parallelism
{
    __m128 t1, t2, t3, t4;
    for (int k = 0; k < N; k++)
    {
        float tmp[4] = { matrix[k][k], matrix[k][k], matrix[k][k], matrix[k][k] };
        t1 = _mm_loadu_ps(tmp);
        for (int j = N - 4; j >= k; j -= 4) // take four elements at a time, back to front
        {
            t2 = _mm_loadu_ps(matrix[k] + j);
            t3 = _mm_div_ps(t2, t1); // division
            _mm_storeu_ps(matrix[k] + j, t3);
        }
        if (k % 4 != (N % 4)) // handle the leftover elements when the count is not divisible by 4
        {
            for (int j = k; j % 4 != (N % 4); j++)
            {
                matrix[k][j] = matrix[k][j] / tmp[0];
            }
        }
        for (int j = (N % 4) - 1; j >= 0; j--)
        {
            matrix[k][j] = matrix[k][j] / tmp[0];
        }
        for (int i = k + 1; i < N; i++)
        {
            float tmp[4] = { matrix[i][k], matrix[i][k], matrix[i][k], matrix[i][k] };
            t1 = _mm_loadu_ps(tmp);
            for (int j = N - 4; j > k; j -= 4)
            {
                t2 = _mm_loadu_ps(matrix[i] + j);
                t3 = _mm_loadu_ps(matrix[k] + j);
                t4 = _mm_sub_ps(t2, _mm_mul_ps(t1, t3)); // subtraction
                _mm_storeu_ps(matrix[i] + j, t4);
            }
            for (int j = k + 1; j % 4 != (N % 4); j++)
            {
                matrix[i][j] = matrix[i][j] - matrix[i][k] * matrix[k][j];
            }
            matrix[i][k] = 0;
        }
    }
}
void print(float **matrix) // output
{
    for (int i = 0; i < N; i++)
    {
        for (int j = 0; j < N; j++)
        {
            cout << matrix[i][j] << " ";
        }
        cout << endl;
    }
}
int main()
{
    srand((unsigned)time(NULL));
    float **matrix = new float*[N];
    float **matrix2 = new float*[N];
    for (int i = 0; i < N; i++)
    {
        matrix[i] = new float[N];
        matrix2[i] = matrix[i];
    }
    //cout << "Initial random matrix generated" << endl;
    for (int i = 0; i < N; i++)
    {
        for (int j = 0; j < N; j++)
        {
            matrix[i][j] = rand() % 100;
        }
    }
    //print(matrix);
    cout << endl << endl << endl << "Gaussian elimination without SSE (serial)" << endl;
    clock_t clockBegin, clockEnd;
    clockBegin = clock(); // start timing
    float **B = normal_gaosi(matrix);
    clockEnd = clock();
    //print(matrix);
    cout << "Total time: " << clockEnd - clockBegin << "ms" << endl;
    cout << endl << endl << endl << "Gaussian elimination with SSE (parallel)" << endl;
    clockBegin = clock(); // start timing
    SSE_gaosi(matrix2);
    clockEnd = clock();
    //print(matrix2);
    cout << "Total time: " << clockEnd - clockBegin << "ms" << endl;
    system("pause");
    return 0;
}
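The two parallelizable loops identified above — the pivot-row division and the row-update subtraction — are exactly the operations that vectorize well. As an illustration of the same idea at a higher level (not part of the original experiment), here is a NumPy sketch where whole-row array operations play the role of the 4-wide SSE lanes:

```python
import numpy as np

def gauss_vectorized(a):
    """Return the row-reduced (unit-diagonal upper-triangular) form of a.
    The division a[k, k:] /= pivot and the rank-1 row update correspond
    to the SSE division and subtraction loops in the C++ code above."""
    a = a.astype(float)  # work on a float copy
    n = a.shape[0]
    for k in range(n):
        a[k, k:] /= a[k, k]                                        # vectorized division
        a[k + 1:, k + 1:] -= np.outer(a[k + 1:, k], a[k, k + 1:])  # vectorized update
        a[k + 1:, k] = 0.0
    return a

m = gauss_vectorized(np.array([[2.0, 4.0], [1.0, 3.0]]))
print(np.allclose(m, [[1.0, 2.0], [0.0, 1.0]]))  # True
```

Like the SSE version, this sketch assumes no pivoting is needed (no zero pivots are encountered).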
• A parallel Gaussian elimination algorithm based on right-looking LU decomposition is proposed, using a GPU (graphics processing unit) to accelerate the solution of complex-coefficient loop-impedance systems; GIS (geographic information system) and virtual-reality techniques are used to visualize the power-flow results. Test cases show the method is insensitive to node numbering...
• 1. General square matrices — Gaussian elimination and the LU decomposition; 2. Invertible matrices — Doolittle/Crout decompositions; 3. Block matrices — quasi-LU and quasi-LDU decompositions. Contents: "Matrix Theory" Study Notes (4)-1: 4.1 Triangular decomposition; I. The matrix form of Gaussian elimination; 1.1 Introduction; 1.2 The triangular-decomposition theory of...
"Matrix Theory" Study Notes (4): 4.1 Triangular Decomposition of a Matrix
Triangular decomposition of a matrix:
1. General square matrices — the LU/LDU decomposition
2. Invertible square matrices — the Doolittle/Crout/Cholesky decompositions
3. Block square matrices — the quasi-LU and quasi-LDU decompositions
• Motivation:
Computations with triangular matrices — determinants, inverses, solving linear systems — are all convenient, so we consider decomposing an ordinary matrix into a product of triangular matrices to simplify the arithmetic.
I. The matrix form of Gaussian elimination
1.1 Introduction
For an n-variable linear system A\vec x=\vec w, one usually solves by Gaussian elimination with pivoting — row-reducing the augmented matrix to a triangular matrix.
The linear system A\vec x=\vec w:
\begin{cases}a_{11}x_1+a_{12}x_2+a_{13}x_3=w_1\\a_{21}x_1+a_{22}x_2+a_{23}x_3=w_2\\a_{31}x_1+a_{32}x_2+a_{33}x_3=w_3\end{cases}
Its augmented matrix:
\left[ \begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & w_1 \\ a_{21} & a_{22} & a_{23} & w_2 \\ a_{31} & a_{32} & a_{33} & w_3 \end{array} \right]
After Gaussian elimination with pivoting:
\left[ \begin{array}{ccc|c} a_{11}' & a_{12}' & a_{13}' & z_1' \\ 0 & a_{22}' & a_{23}' & z_2' \\ 0 & 0 & a_{33}' & z_3' \end{array} \right]
1.2 The triangular-decomposition theory in matrix analysis
To build the triangular-decomposition theory, we describe the elimination steps of Gaussian elimination in the language of matrices.
- The Gaussian elimination process:
Eliminate on the matrix A, choosing pivots in natural order.
Set A^{(0)}=A. Since row-replacement (multiple-addition) elementary operations do not change the determinant, we repeatedly use the k-th order leading principal minors \triangle_k of A to construct Frobenius matrices L_k, finally obtaining the matrix A^{(n-1)}:
A^{(n-1)}=\left[ \begin{array}{cccc} a_{11}^{(0)} & a_{12}^{(0)} & \dots & a_{1n}^{(0)} \\ & a_{22}^{(1)} & \dots & a_{2n}^{(1)} \\ & & \ddots & \vdots \\ & & & a_{nn}^{(n-1)} \end{array} \right]
- Key points of the elimination process:
1- Feature of the process: no row/column exchanges are used.
2- Condition for running to completion: the first n-1 leading principal minors are all nonzero.
3- Feature of the result: the LU decomposition obtained from the Gaussian elimination of A exists and is unique,
where L is a unit lower triangular matrix and U is an upper triangular matrix.
II. The LU/LDU decomposition of a matrix
From A=A^{(0)}=L_1A^{(1)}=\dots=L_1L_2\cdots L_{n-1}A^{(n-1)} we obtain the following.
2.1 The LU/LDU decomposition formulas
- The LU decomposition formula:
A=LU,
where L is a lower triangular matrix and U is an upper triangular matrix: L=L_1L_2\cdots L_{n-1} and
U=A^{(n-1)}=\left[ \begin{array}{cccc} a_{11}^{(0)} & a_{12}^{(0)} & \dots & a_{1n}^{(0)} \\ & a_{22}^{(1)} & \dots & a_{2n}^{(1)} \\ & & \ddots & \vdots \\ & & & a_{nn}^{(n-1)} \end{array} \right]
(Being able to express a matrix as a product of a lower triangular and an upper triangular matrix simplifies computation.)
- The LDU decomposition formula:
A=LDU,
where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix.
2.2 Existence and uniqueness of the triangular decomposition
"Existence" must hold first; only then is it meaningful to discuss "uniqueness".
- Singular matrices, existence:
1- If the first n-1 leading principal minors are all nonzero ⇒ a triangular decomposition exists [necessary condition].
2- If a triangular decomposition exists, the first n-1 leading principal minors are not necessarily all nonzero.
3- A triangular decomposition may not exist.
(Singular ⇒ det(A)=0, i.e. a_{nn}=0.)
- Singular matrices, uniqueness:
1- If a triangular decomposition exists and the first n-1 leading principal minors are nonzero
⇒ A=LU is unique and A=LDU is unique.
2- If a triangular decomposition exists but that condition fails
⇒ A=LU/LDU is not necessarily unique.
3- (none)
- Nonsingular matrices, existence:
1- The first n-1 leading principal minors are all nonzero ⇔ a triangular decomposition exists [necessary and sufficient].
[In fact, for a nonsingular matrix a_{nn}≠0, so all n leading principal minors are nonzero.]
2- If no triangular decomposition exists, i.e. the n leading principal minors are not all nonzero
⇔ there exists a permutation matrix P such that all leading principal minors of PA are nonzero; A itself has no LU decomposition, but PA does. [Requiring all minors nonzero is stronger than needed — the first n-1 suffice — but it holds by nonsingularity itself.]
- Nonsingular matrices, uniqueness:
1- A triangular decomposition exists (the first n-1 leading principal minors of A are nonzero)
⇔ A=LU is unique and A=LDU is unique.
2- No triangular decomposition exists
⇔ PA=LU is unique and PA=LDU is unique.
• The first n-1 leading principal minors and uniqueness:
1- All nonzero ⇔ A=LU/LDU exists and is unique, and the Doolittle/Crout/Cholesky decompositions are unique [necessary and sufficient].
2- Some zero ⇒ A=LU/LDU does not necessarily exist;
if it exists, it is not necessarily unique — there may be several decompositions.
- Understanding the above:
The triangular decomposition of the square matrix A studied here is derived from Gaussian elimination. From that process:
1. As long as the first n-1 leading principal minors are nonzero, an LU decomposition is always obtained (here L is unit lower triangular and U is upper triangular), and this decomposition is unique.
The n-th leading principal minor is not required to be nonzero (det(A) may be 0, corresponding to a_{nn}=0), so A may be singular. If A is singular and LU exists, the bottom-right entry of U is 0. At that point the elimination-based triangular decomposition has already finished, having proceeded only to A^{(n-1)}.
2. This introduces the notion of triangular decomposition and, with it, the question of whether A=LU is unique.
Clearly, since A=LD\,D^{-1}U, an LU factorization is not unique in general.
But Gaussian elimination with natural pivot order, whose execution requires the first n-1 leading principal minors to be nonzero, yields a unique LU decomposition (L unit lower triangular, U upper triangular), and this decomposition can be written in LDU form (simply pull the diagonal entries of U out as the diagonal matrix D; the corresponding U then becomes unit upper triangular).
3. This leads to the theorem:
[The first n-1 leading principal minors are nonzero if and only if ("is equivalent to") A can be decomposed uniquely as LDU.]
If a 0 appears among the first n-1 leading principal minors, the triangular decomposition may not exist or there may be several.
4. Corollary: in the nonsingular case, the existence of a general LU decomposition is equivalent to all leading principal minors being nonzero.
Note, however, that requiring all leading principal minors to be nonzero is a rather strong condition. For a nonsingular matrix A there exists a permutation P such that all leading principal minors of PA are nonzero; that is, even when A has no LU decomposition, PA does.
5. For a singular matrix A one can similarly introduce permutation matrices and examine whether the first n-1 leading principal minors become nonzero. This in fact corresponds to column pivoting (row permutations only) versus complete pivoting (both row and column permutations). The original matrix A may admit no triangular decomposition while a permuted version of it does.
6. A general triangular decomposition may fail to exist and may fail to be unique.
But natural-order Gaussian elimination naturally fixes one side to be a unit triangular matrix, and since the condition of Theorem 4.1 is satisfied, the decomposition is uniquely determined.
In this sense, the triangular-decomposition conditions discussed above imply a general triangular decomposition, but not conversely.
Still, combining elementary transformations with triangular decomposition, a similar computation can be carried out for a matrix of rank r.
2.3 Computing the triangular decomposition
See: [Linear Algebra and Its Applications, §2.5]
For an invertible square matrix A, if A can be reduced to an echelon matrix U using only row-replacement (row-addition) operations:
1. There exist unit lower triangular elementary matrices E_1,\dots,E_p such that E_p\cdots E_1A=U, hence A=(E_p\cdots E_1)^{-1}U.
2. Let L=(E_p\cdots E_1)^{-1}; it satisfies (E_p\cdots E_1)L=I.
- In other words: the same sequence of row operations takes A \to U and takes L \to I.
- Steps for computing the triangular decomposition:
1. A \to U: reduce A_{m\times n} by elementary row operations (no row exchanges) to the row-echelon matrix U_{m\times n}.
2. U \to L: locate the pivot columns of U and divide each pivot column by its pivot to obtain L_{m\times m}, a unit lower triangular square matrix.
If the number of pivot columns r < m, pad the last m-r columns of L with the corresponding columns of I_m.
III. Triangular decompositions of invertible matrices
3.1 Preconditions for the Doolittle/Crout decompositions
A is a nonsingular/invertible matrix, and the LDU decomposition of A is assumed to exist.
(Then in A=LDU the diagonal entries of D are necessarily nonzero. Under natural-order Gaussian elimination, the last entry of D could be 0 only in the singular case.)
Special LU decompositions of an invertible matrix A and their formulas:
1- Doolittle decomposition: A=L(DU)=LU'
2- Crout decomposition: A=(LD)U=L'U
3- Cholesky decomposition: A=GG^T
3.2 The Doolittle decomposition
A=L(DU)=LU', where U'=DU;
L is a lower triangular matrix with unit diagonal and U' is an upper triangular matrix:
L=\left[ \begin{array}{cccc} 1 & & & \\ l_{21} & 1 & & \\ \vdots & & \ddots & \\ l_{n1} & l_{n2} & \cdots & 1 \end{array} \right],\quad U'=\left[ \begin{array}{cccc} u_{11} & u_{12} & \cdots & u_{1n} \\ & u_{22} & \cdots & u_{2n} \\ & & \ddots & \vdots \\ & & & u_{nn} \end{array} \right]
To save storage, the entries of L and U' are written into the corresponding positions of A, giving the matrix
\left[ \begin{array}{cccc} u_{11} & u_{12} & \cdots & u_{1n} \\ l_{21} & u_{22} & \cdots & u_{2n} \\ \vdots & & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & u_{nn} \end{array} \right]
3.3 The Crout decomposition
A=(LD)U=L'U, where L'=LD;
L' is a lower triangular matrix and U is an upper triangular matrix with unit diagonal:
L'=\left[ \begin{array}{cccc} l_{11} & & & \\ l_{21} & l_{22} & & \\ \vdots & & \ddots & \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{array} \right],\quad U=\left[ \begin{array}{cccc} 1 & u_{12} & \cdots & u_{1n} \\ & 1 & \cdots & u_{2n} \\ & & \ddots & \vdots \\ & & & 1 \end{array} \right]
To save storage, the entries of L' and U are written into the corresponding positions of A, giving the matrix
\left[ \begin{array}{cccc} l_{11} & u_{12} & \cdots & u_{1n} \\ l_{21} & l_{22} & \cdots & u_{2n} \\ \vdots & & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{array} \right]
• Steps of the Crout algorithm:
The columns of L' and the rows of U are computed alternately (see the textbook, p. 135).
3.4 The Cholesky decomposition
If A is not only nonsingular/invertible with an existing LDU decomposition, but also real symmetric positive definite, then:
A=A^T and LDU=U^TDL^T, so U=L^T and L=U^T, whence
A=LDU=L(D')^2L^T=(LD')(LD')^T=GG^T,
where G=LD' is lower triangular and D'=\sqrt{D}.
IV. Quasi-LU and quasi-LDU decompositions of block matrices
• Motivation:
For high-order square matrices, decomposing into quasi-triangular and quasi-diagonal block matrices greatly reduces the amount of computation.
Partition the matrix A\in R^{n\times n} as
A=\left[ \begin{array}{cc} A_{11} & A_{12} \\ A_{21} & A_{22} \end{array} \right]
1. If A_{11} is invertible, a block lower triangular factor can be constructed so that A admits a quasi-LU and a quasi-LDU decomposition, with
\det(A)=\det(A_{11})\det(A_{22}-A_{21}A_{11}^{-1}A_{12}).
2. If A_{22} is invertible, likewise, with
\det(A)=\det(A_{22})\det(A_{11}-A_{12}A_{22}^{-1}A_{21}).
• Consequence:
If A, B\in R^{m\times n}, then
\det(I_m+AB^T)=\det(I_n+B^TA).
V. Applications of the triangular decomposition
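One application of the theory above is computing the factors numerically. A minimal Doolittle-style sketch in NumPy that produces the A = LDU form discussed above (no pivoting, so it assumes the leading principal minors are nonzero):

```python
import numpy as np

def ldu(a):
    """Doolittle LU without pivoting, then split U = D * U_unit so that
    a = L @ D @ U_unit with L unit lower triangular, D diagonal, and
    U_unit unit upper triangular."""
    a = a.astype(float)
    n = a.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        U[i, i:] = a[i, i:] - L[i, :i] @ U[:i, i:]                  # row i of U
        L[i + 1:, i] = (a[i + 1:, i] - L[i + 1:, :i] @ U[:i, i]) / U[i, i]  # column i of L
    D = np.diag(np.diag(U))
    U_unit = U / np.diag(U)[:, None]
    return L, D, U_unit

A = np.array([[2.0, 1.0], [4.0, 5.0]])
L, D, U = ldu(A)
print(np.allclose(L @ D @ U, A))  # True
```

By the uniqueness result above, for a matrix whose leading principal minors are nonzero this L, D, U triple is the only one with unit triangular factors.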
• Chapter 6, transform-and-conquer. Presorting: used in scenarios such as checking the uniqueness of the elements of an array, ... LU decomposition: the upper triangular matrix obtained from Gaussian elimination is U, and the row multipliers produced during elimination form the lower triangular matrix L (assuming no row exchanges occur). For example, when eliminating column i, row i is multiplied by ...
• Gaussian elimination. rref(): transforms the augmented matrix of a linear system so that, apart from the diagonal entries being 1, all other entries are 0. >> A=[1 2 1; 2 6 1; 1 1 4]; >> b=[2;7;3]; >> R=rref([A b]) R = 1 0 0 -3 0 1 0 2 0 0 1 1 *...
• The MATLAB lu function
2013-11-20 20:18:00
Free high-quality learning resources, focused on (but not limited to) [Linux] [C/C++/Qt]... Gaussian elimination for solving linear systems, including the process of converting the augmented matrix to triangular form; the elimination stage decomposes the matrix A into the product of a lower triangular L and an upper triangular U. This...
• The tutorial covers the definitions of the LU and Cholesky decompositions, the conditions for the Cholesky decomposition, testing positive definiteness with NumPy's eigenvalue function, the derivation of the Cholesky algorithm, and coding it in Python. Gauss-Jordan method tutorial — step-by-step theory and coding; symbolic and numeric examples explain the method step by step...
• GaussElimination.m implements Gaussian elimination for solving linear systems; L_GaussElimination.m implements Gaussian elimination with column pivoting; LU.m implements the LU decomposition; LUP.m implements the LU decomposition with column pivoting; reverse_and_det.m implements matrix inversion and determinants via the LU decomposition ...
• Rewriting Gaussian elimination in compact form gives recurrence formulas that compute the entries of L and U directly from the entries of A, so that A can be decomposed into L and U. Using the LU decomposition, solving Ax=b is equivalent to solving Ly=b and Ux=y. I defined a function that creates a Hilbert matrix from a given number of rows (columns)...
• Gaussian elimination rref(): R = rref(A) returns the reduced row echelon form of A using Gauss-Jordan elimination with partial pivoting. Applying rref() to the augmented matrix [A b] solves the linear system Ax=b. LU factorization: [L,U,P] = lu(A) factors a full or sparse matrix A ...
• An ordinary matrix inverse can be computed with Gaussian elimination, very efficiently; LU or QR decomposition can also be used. Inverses over binary extension fields are different. They look simple but differ in substance: all elements are defined in GF(2^m). In theory Gaussian elimination still seems applicable, only the arithmetic rules ...
• 4.2 The LU decomposition; 4.3 triangular decompositions for special linear systems; 4.4 vector and matrix norms; 4.5 conditioning and error analysis of systems; Exercises 4; Chapter 5, iterative methods for linear systems: 5.1 common iterations; 5.2 convergence of iterations; Exercises 5; Chapter 6, root finding for nonlinear equations: 6.1 ...
• 6.3 The LU decomposition; 6.4 two classes of special matrix equations; chapter summary; Exercises 6; Lab 6; Chapter 7, iterative methods for linear systems: 7.1 principles of iteration; 7.2 classical iterations and their convergence; 7.3 the conjugate gradient method; chapter summary; Exercises 7; Chapter 8, matrix eigenvalue problems...
• Introduction to the Design and Analysis of Algorithms
2018-02-12 19:26:04
6.2 Gaussian elimination; 6.2.1 the LU decomposition; 6.2.2 computing the matrix inverse; 6.2.3 computing the determinant; Exercises 6.2; 6.3 balanced search trees; 6.3.1 AVL trees; 6.3.2 2-3 trees; Exercises 6.3; 6.4 heaps and heapsort; 6.4.1 the heap concept; 6.4.2 heapsort; Exercises 6.4; 6.5 Horner's rule and binary exponentiation; 6.5.1 Horner's rule...
• Linear algebra | review notes
2020-07-26 18:36:49
The Schwarz inequality; the triangle inequality; 2. matrices; linear systems; understanding matrix-vector products; understanding linear systems; invertible matrices; the row and column pictures of a system; 3. Gaussian elimination; elementary row operations; augmented matrices; elimination matrices; permutation matrices; 4. matrix operations; properties of matrix multiplication; block matrices; matrix...
• This part is designed for solving linear systems; contents include direct methods: Gaussian elimination, Gaussian elimination with column pivoting, Gaussian elimination with complete pivoting, applications of column pivoting (inverse and determinant via column pivoting, triangular factorization), the LU decomposition, ...
• CD-ROM accompanying a book of mathematical algorithms
2012-05-11 17:38:03
2. LU decomposition; 3. the chasing (Thomas) method; 4. pentadiagonal systems; 5. iterative improvement of solutions; 6. Vandermonde systems; 7. Toeplitz systems; 8. singular value decomposition; 9. conjugate gradients for linear systems; 10. Cholesky decomposition for symmetric systems; 11. QR decomposition; 12. relaxation...
• 2. LU decomposition; 3. the chasing method; 4. pentadiagonal systems; 5. iterative improvement of solutions; 6. Vandermonde systems; 7. Toeplitz systems; 8. singular value decomposition; 9. conjugate gradients; 10. Cholesky decomposition for symmetric systems; 11. QR decomposition...
• A classic collection of VC++ numerical-analysis algorithms
Popular discussion, 2010-10-21 21:03:49
1.2 LU decomposition; 1.3 the chasing method; 1.4 pentadiagonal systems; 1.5 iterative improvement of solutions; 1.6 Vandermonde systems; 1.7 Toeplitz systems; 1.8 singular value decomposition; 1.9 conjugate gradients; 1.10 symmetric systems...
• A Delphi algorithm collection with source code
2012-07-24 09:46:04
2. LU decomposition; 3. the chasing method; 4. pentadiagonal systems; 5. iterative improvement of solutions; 6. Vandermonde systems; 7. Toeplitz systems; 8. singular value decomposition; 9. conjugate gradients; 10. Cholesky decomposition for symmetric systems; 11. QR decomposition...
• A collection of common numerical algorithms for Visual C++
Popular discussion, 2012-03-19 11:57:59
1.2 LU decomposition; 1.3 the chasing method; 1.4 pentadiagonal systems; 1.5 iterative improvement of solutions; 1.6 Vandermonde systems; 1.7 Toeplitz systems; 1.8 singular value decomposition; 1.9 conjugate gradients; 1.10 symmetric systems...
• 1. Gauss-Jordan elimination with complete pivoting; 2. LU decomposition; 3. the chasing method; 4. pentadiagonal systems; 5. iterative improvement of solutions; 6. Vandermonde systems; 7. Toeplitz systems; 8. singular value decomposition; 9. conjugate gradients; 10. Cholesky decomposition for symmetric systems; 11. QR...
• 2. LU decomposition; 3. the chasing method; 4. pentadiagonal systems; 5. iterative improvement of solutions; 6. Vandermonde systems; 7. Toeplitz systems; 8. singular value decomposition; 9. conjugate gradients; 10. Cholesky decomposition for symmetric systems; 11. QR decomposition...
• This part is designed for solving linear systems; contents include direct methods: Gaussian elimination, Gaussian elimination with column pivoting, Gaussian elimination with complete pivoting, applications of column pivoting (inverse, determinant, triangular factorization), the LU decomposition, ...
空空如也
空空如也
1 2
收藏数 33
精华内容 13
关键字:
lu分解法与高斯消去法 | __label__pos | 0.953554 |
Properties
Label 690.2.j
Level $690$
Weight $2$
Character orbit 690.j
Rep. character $\chi_{690}(367,\cdot)$
Character field $\Q(\zeta_{4})$
Dimension $48$
Newform subspaces $2$
Sturm bound $288$
Trace bound $6$
Defining parameters
Level: \( N \) \(=\) \( 690 = 2 \cdot 3 \cdot 5 \cdot 23 \)
Weight: \( k \) \(=\) \( 2 \)
Character orbit: \([\chi]\) \(=\) 690.j (of order \(4\) and degree \(2\))
Character conductor: \(\operatorname{cond}(\chi)\) \(=\) \( 115 \)
Character field: \(\Q(i)\)
Newform subspaces: \( 2 \)
Sturm bound: \(288\)
Trace bound: \(6\)
Distinguishing \(T_p\): \(7\)
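The Sturm bound of 288 can be recomputed from the level and weight: a standard formula gives it as \(k\,\mu/12\) (rounded down), where \(\mu = [\mathrm{SL}_2(\mathbb{Z}) : \Gamma_0(N)] = N\prod_{p \mid N}(1 + 1/p)\). A small Python sketch (the helper names are mine):

```python
def gamma0_index(N):
    """Index [SL2(Z) : Gamma_0(N)] = N * prod over primes p | N of (1 + 1/p)."""
    primes, n, p = set(), N, 2
    while p * p <= n:
        while n % p == 0:
            primes.add(p)
            n //= p
        p += 1
    if n > 1:
        primes.add(n)
    index = N
    for p in primes:
        index = index * (p + 1) // p  # exact: p divides N, hence divides index
    return index

def sturm_bound(N, k):
    """Bound k * [SL2(Z):Gamma_0(N)] / 12 on the number of q-expansion
    coefficients needed to distinguish forms of weight k and level N."""
    return k * gamma0_index(N) // 12

print(gamma0_index(690))    # 1728
print(sturm_bound(690, 2))  # 288, matching the table
```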
Dimensions
The following table gives the dimensions of various subspaces of \(M_{2}(690, [\chi])\).
Total New Old
Modular forms 304 48 256
Cusp forms 272 48 224
Eisenstein series 32 0 32
Trace form
\( 48q + 16q^{13} - 48q^{16} + 8q^{23} - 32q^{25} - 16q^{26} + 32q^{31} - 16q^{35} + 48q^{36} + 32q^{47} + 16q^{50} + 16q^{52} - 32q^{55} - 64q^{62} + 48q^{71} - 64q^{73} + 32q^{75} + 16q^{77} + 16q^{78} - 48q^{81} + 48q^{82} - 48q^{85} + 32q^{87} + 8q^{92} + 48q^{93} + 48q^{95} - 32q^{98} + O(q^{100}) \)
Decomposition of \(S_{2}^{\mathrm{new}}(690, [\chi])\) into newform subspaces
Label Dim. \(A\) Field CM Traces (\(a_2\), \(a_3\), \(a_5\), \(a_7\)) \(q\)-expansion
690.2.j.a \(24\) \(5.510\) None \(0\) \(0\) \(0\) \(0\)
690.2.j.b \(24\) \(5.510\) None \(0\) \(0\) \(0\) \(0\)
Decomposition of \(S_{2}^{\mathrm{old}}(690, [\chi])\) into lower level spaces
\( S_{2}^{\mathrm{old}}(690, [\chi]) \cong \) \(S_{2}^{\mathrm{new}}(115, [\chi])\)\(^{\oplus 4}\)\(\oplus\)\(S_{2}^{\mathrm{new}}(230, [\chi])\)\(^{\oplus 2}\)\(\oplus\)\(S_{2}^{\mathrm{new}}(345, [\chi])\)\(^{\oplus 2}\) | __label__pos | 0.995623 |
Distribution of the sum of three random variables
• Thread starter TeXfreak
• #1
Hi everyone. I have this problem. Given three random variables X, Y, Z with joint pdf (probability density function)
f(x,y,z)=\exp(-(x+y+z)) if x>0, y>0, z>0; 0 elsewhere
find the pdf of U (f_U), where U is the random variable given by U=(X+Y+Z)/3.
Now I know how to find the joint pdf of a transformed random vector of the same dimension as the original (via the Jacobian of the inverse transformation, that is, when the transformation is from R^n to R^n, whereas here it is from R^3 to R), and how to find the pdf of the sum of two independent random variables (via the convolution of their pdfs), but I can't figure out how to do this one.
One could set the transformation to be g : R^3 \to R^3 defined by g(x,y,z)=((x+y+z)/3,y,z) (though I am not sure whether that would be right), so as to find the pdf of g(X,Y,Z) and then find the marginal density function of U, but then the integral does not converge.
And trying convolutions, something like f_U = f_X * (f_Y * f_Z) (and here I am less sure if it's right), the integral doesn't converge either.
Could anybody can help me with this problem, please? Thanks in advance.
Answers and Replies
• #2
First find the cumulative distribution function, F_U(u) = P((X+Y+Z)/3 < u), by integrating the joint density function f(x,y,z) over the tetrahedron satisfying x > 0, y > 0, z > 0, (x+y+z)/3 < u. Then differentiate F_U to get the density function for U.
• #3
gel
A general method I often use is as follows. The probability density function fU of a random variable U is defined by the following expression for the expected value of h(U), for any function h
[tex]
E[h(U)] = \int f_U(u) h(u)\,du.
[/tex]
Just substitute in U=(X+Y+Z)/3
[tex]
\int f_U(u) h(u)\,du=E[h((X+Y+Z)/3)]=\int_0^\infty \int_0^\infty\int_0^\infty f(x,y,z)h((x+y+z)/3) \,dx\,dy\,dz
[/tex]
change variables x = 3u - y - z in the inner integral
[tex]
\int f_U(u) h(u)\,du=\int_0^\infty \int_0^\infty\int_{(y+z)/3}^\infty f(3u-y-z,y,z)h(u) 3\,du\,dy\,dz
[/tex]
Change the order of integration
[tex]
\int f_U(u) h(u)\,du=\int_0^\infty\int_0^{3u} \int_0^{3u-z} f(3u-y-z,y,z)h(u) 3\,dy\,dz\,du
[/tex]
from which you can read off the density
[tex]
f_U(u)=3\int_0^{3u} \int_0^{3u-z} f(3u-y-z,y,z) \,dy\,dz
[/tex]
This is virtually the same as calculating the cumulative distribution (by taking [itex]h(u)=1_{\{u>K\}}[/itex]), but without the differentiation step to convert to the density at the end. So, whichever method you prefer.
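For the density in the original question the result can also be sanity-checked numerically: with f(x,y,z) = e^{-(x+y+z)} on the positive octant, X, Y, Z are independent Exp(1) variables, so X+Y+Z is Gamma(3,1) and the derivation above reduces to f_U(u) = (27/2) u^2 e^{-3u}. A Python sketch (the Monte Carlo setup is mine, not from the thread):

```python
import math
import random

random.seed(1)

# X, Y, Z iid Exp(1) implies S = X+Y+Z ~ Gamma(3,1), and U = S/3 has density
# f_U(u) = 3 * f_S(3u) = 3 * (3u)^2 * e^{-3u} / 2 = (27/2) u^2 e^{-3u}.
def f_U(u):
    return 13.5 * u * u * math.exp(-3.0 * u)

# Monte Carlo: compare P(a < U < b) from simulation against the integral of f_U.
n = 200_000
samples = [(random.expovariate(1) + random.expovariate(1) + random.expovariate(1)) / 3
           for _ in range(n)]
a, b = 0.5, 1.5
empirical = sum(a < u < b for u in samples) / n

# Numerical integral of f_U over (a, b) by the midpoint rule.
steps = 10_000
h = (b - a) / steps
theoretical = sum(f_U(a + (i + 0.5) * h) for i in range(steps)) * h

print(round(empirical, 3), round(theoretical, 3))  # the two should agree closely
assert abs(empirical - theoretical) < 0.01
```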
Is delegate an observer pattern?
It’s called the ‘Observer Pattern’, not ‘Observable Pattern’. And technically speaking, a delegate conforms to the Observer Pattern: it’s an invokable object reference that a subject can notify. The subject is your class that has an event/delegate.
What is the difference between delegate and observer?
The observer pattern is already implemented for you in the form of events. The advantage of events is that they can have multiple subscribers, while with a delegate, you can only have one.
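The one-to-many versus one-to-one distinction is easy to see in code. A minimal Python sketch (the class and method names are illustrative, not from any particular framework):

```python
class Subject:
    """A subject that supports both a single delegate and many observers."""

    def __init__(self):
        self._observers = []   # observer pattern / events: any number of subscribers
        self.delegate = None   # delegate: a single callback slot

    def subscribe(self, callback):
        self._observers.append(callback)

    def change_state(self, value):
        if self.delegate is not None:
            self.delegate(value)       # exactly one receiver
        for callback in self._observers:
            callback(value)            # every subscriber is notified

log = []
subject = Subject()
subject.delegate = lambda v: log.append("delegate saw %s" % v)
subject.subscribe(lambda v: log.append("observer A saw %s" % v))
subject.subscribe(lambda v: log.append("observer B saw %s" % v))
subject.change_state(42)
print(log)
# ['delegate saw 42', 'observer A saw 42', 'observer B saw 42']
```

The delegate slot holds one callable at a time, while the observer list can grow without the subject changing.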
Are events observer pattern?
Events are something like a “built-in” observer pattern template for some languages. Thus, you wouldn’t really implement the observer pattern in a language which supports events as they already provide what you are looking for.
What is the disadvantage of observer design pattern?
The main disadvantage of the observer design pattern is that subscribers are notified in random order. There can also be memory leaks, because observers must explicitly register and unregister.
What is observable design pattern?
The observer pattern is a software design pattern in which an object, named the subject, maintains a list of its dependents, called observers, and notifies them automatically of any state changes, usually by calling one of their methods.
What is delegate and notification in Swift?
Use delegates when you want the receiving object to influence an action that will happen to the sending object. Use notifications when you need to inform multiple objects of an event.
Could you explain what is the difference between delegate and KVO?
KVO is useful to listen “without the class knowing”; although of course that’s not strictly the case, the class on which KVO is applied does not need to be changed.
Why do we use observer pattern?
Observer pattern is used when there is one-to-many relationship between objects such as if one object is modified, its depenedent objects are to be notified automatically. Observer pattern falls under behavioral pattern category.
What are the consequences of observer pattern?
Consequences. The Observer pattern lets you vary subjects and observers independently. You can reuse subjects without reusing their observers, and vice versa. It lets you add observers without modifying the subject or other observers.
Is Observer pattern obsolete?
Ans: The Observable class and the Observer interface have been deprecated in Java 9 because the event model supported by Observer and Observable is quite limited, the order of notifications delivered by Observable is unspecified, and state changes are not in one-for-one correspondence with notifications.
What are the two consequences of using the Observer pattern?
When should we use Observer pattern?
Why do we need Observer pattern?
The Observer Pattern is an appropriate design pattern to apply in any situation where you have several objects which are dependent on another object and are required to perform an action when the state of that object changes, or an object needs to notify others without knowing who they are or how many there are.
What is the difference between delegate and notifications?
What is difference between closure and delegate Swift?
Difference between protocol Delegate and Closures – As we see both method’s functionality are the same but there is a difference – In the protocol, we can make more than one function but in Closures, it is a self-contained block of functionality.
What is the difference between a delegate and an Nsnotification?
Delegate is passing message from one object to other object. It is like one to one communication while nsnotification is like passing message to multiple objects at the same time. All other objects that have subscribed to that notification or acting observers to that notification can or can’t respond to that event.
Why is Observer deprecated?
What can I use instead of Observer pattern?
There are many alternatives of Observer design pattern and Reactive Streams is one of them. Reactive Streams or Flow API: Flow is a class introduced in Java 9 and has 4 interrelated interfaces : Processor , Publisher , Subscriber and Subscription . Flow.
What are the consequences of Observer pattern?
What are the two consequences of using the observer pattern?
Related Posts | __label__pos | 1 |
Revealing module pattern
if (typeof DOJ == 'undefined') {
DOJ = {};
}
// http://toddmotto.com/mastering-the-module-pattern/
DOJ = function () {
var _myPrivateVar = "_myPrivateVar can be accessed only from within DOJ.";
var myPublicProperty = "myPublicProperty is accessible as DOJ.myPublicProperty.";
var _myPrivateMethod = function () {
console.log("_myPrivateMethod can be accessed only from within DOJ.");
return "Returned from _myPrivateMethod";
};
var myPublicMethod = function () {
console.log("myPublicMethod is accessible as DOJ.myPublicMethod.");
//Within DOJ, I can access "private" vars and methods:
console.log(_myPrivateVar);
console.log(_myPrivateMethod());
//The native scope of myPublicMethod is DOJ; we can
//access public members using "this":
console.log(this.myPublicProperty);
};
// return public properties and methods
return {
myPublicProperty: myPublicProperty,
myPublicMethod: myPublicMethod
};
}();
Arpeggiate
Write step operations with input validation, type casting, and error handling.
Steps
Quite often in software you want to perform a series of steps towards an end result while gracefully handling error conditions. This is the goal of arpeggiate. With arpeggiate, you can define any number of steps to perform. Each step hands its state to the next step in a functional way using {:ok, state}. Error handlers can be defined for each step to facilitate cleanup or other such error-handling activities. Triggering the error state is as simple as returning an {:error, state} tuple. Error handlers are optional. Steps without error handlers will proceed to the next step regardless of their outcome.
Generally you want to work with step state, but occasionally directly accessing the raw params as passed into the operation is useful. As such, steps may be either arity 1 (state) or, if you require access to the params in a step or error handler, arity 2 (state, params) is also supported.
Schema
Arpeggiate leverages many of the robust casting and validation features of the Ecto project to process each operation's input parameters. Arpeggiate's operation state is an Ecto.Changeset, which provides a familiar interface for changing the state and handling error messages.
The schema of the operation state is defined by passing a schema block.
Loading
Typically input parameters are cast to the state struct and validation is optionally run. We do this by defining a load method that receives the params and converts it into state.
Processing
To run the operation, we call process with the input parameters. If the whole operation succeeds, an {:ok, state} tuple is returned, with state being the state returned by the last step. In the error case, the step sequence is halted, an {:error, state, validation_step} tuple is returned, with state being the state returned by the failing step's error handler and validation_step being the name of the step that failed (represented as an atom).
If you need validation, call validate right away and pass the result to the steps. If you don't need validation, you can call step directly.
Example
Let's say we want to take some money from Sam in exchange for baking him a pie. If baking the pie fails, we want to clean up by sending Sam a refund and an apology email.
defmodule PayForPie do
use Arpeggiate
schema do
field :email, :string
field :pie_type, :string
field :credit_card_number, :integer
# let's say we have a Payment struct defined in our app, we can cast the
# result of payment into a Payment struct using Ecto embedding
embeds_one :payment, Payment
end
load fn params ->
# you can use any Ecto validation you want here, including any custom
# validators you have written
params_to_struct(params)
|> cast(params, [:email, :pie_type, :credit_card_number])
|> validate_required([:email, :pie_type, :credit_card_number])
end
def process(params) do
validate()
|> step(:run_credit_card, error: :payment_failed)
|> step(:bake_pie, error: :baking_failed)
|> run(params)
end
# --- step 1
# steps can be defined with arity 1 or arity 2, taking either (state, params)
# arguments, or just (state) if params aren't needed for the particular step
def run_credit_card(state, params) do
{status, payment} = CreditCard.charge(state.credit_card_number)
# if the result of CreditCard.charge is an {:ok, payment} tuple, the
# operation will continue to the next step with the updated state. if the
# result is an {:error, payment} tuple, the operation will halt and run the
# error handler specified for the step. in either case we want to cast the
# payment into the state
{status, state |> cast_embed(:payment, payment)}
end
def payment_failed(state) do
# no need for a status tuple since this is always an error condition
:payment_failed
end
# --- step 2
# here's an example of an arity 1 step where params aren't used
def bake_pie(state) do
# if Pie.bake returns an {:ok, pie} tuple, the operation will return the
# pie. if Pie.bake returns an {:error, _something_else} tuple, the
# operation will run the error handler specified for the step
Pie.bake(state.pie_type)
end
def baking_failed(state) do
{:ok, refund} = CreditCard.refund(state.payment.id)
{:ok, email} = Mailer.send_apology(state.email)
{:error, state}
end
end
With this operation, you'd call it like so:
case PayForPie.process(
%{
"email" => "[email protected]",
"pie_type" => "cherry",
"credit_card_number" => "4242424242424242"
}
) do
{:ok, pie} ->
# everything succeeded!
{:error, :payment_failed, :run_credit_card} ->
# uh oh
end | __label__pos | 0.833076 |
Just asking here rather than posting in the forum, because I think this can be answered better here.
A simple 4-button movement (up, down, left, right) seems to not be working, and I don't know why. It's odd.
I've tried this type of code on other platforms and it's fine.
It works with the last movement code, so left/right is fine, but up/down, nope, not working.
Here is my code:
func _fixed_process(delta):
LEFT_BTN = Input.is_action_pressed('LEFT_BTN');
RIGHT_BTN = Input.is_action_pressed('RIGHT_BTN');
UP_BTN = Input.is_action_pressed('UP_BTN');
DOWN_BTN = Input.is_action_pressed('DOWN_BTN');
if UP_BTN:
self.set_linear_velocity(Vector2(0, -player_speed));
elif DOWN_BTN:
self.set_linear_velocity(Vector2(0, player_speed));
else:
self.set_linear_velocity(Vector2(0, 0));
if LEFT_BTN:
self.set_linear_velocity(Vector2(-player_speed, 0));
elif RIGHT_BTN:
self.set_linear_velocity(Vector2(player_speed, 0));
else:
self.set_linear_velocity(Vector2(0, 0));
1 Answer
Best answer
It's because the left and right code is overwriting the up and down code. You should use one Vector2 to set all of the movement, then set the linear velocity only once. Also the vector should be normalized so that diagonal movement is the same speed as horizontal and vertical movement.
func _fixed_process(delta):
var LEFT_BTN = Input.is_action_pressed('LEFT_BTN');
var RIGHT_BTN = Input.is_action_pressed('RIGHT_BTN');
var UP_BTN = Input.is_action_pressed('UP_BTN');
var DOWN_BTN = Input.is_action_pressed('DOWN_BTN');
var movement = Vector2(0, 0)
if UP_BTN:
movement.y = -1;
elif DOWN_BTN:
movement.y = 1;
if LEFT_BTN:
movement.x = -1;
elif RIGHT_BTN:
movement.x = 1;
movement = movement.normalized() * player_speed; #normalize movement
self.set_linear_velocity(movement);
thank you very much.
makes sense when i look at it more and more.
bookmarked just in case ;)
coming back to godot has been a little confusing, but happy to be back ;)
again, thanks
Why is /crossdomain.xml showing up in my Google Analytics SEO landing pages?
Question posted by Ralph du Plessis on Feb 06, 2012 in Analytics, Reporting, and Technical SEO Issues
Can anyone tell me in plain English where Google Analytics is finding crossdomain.xml as one of my SEO landing pages?
As far as I understand its purpose is to allow things like Flash to access certain data on your domain in a secure and authorized manner.
There is no Google Analytics on this "page" and no crawler or user can see it so why is it showing up?
1 Response
I see it in my raw logs when it is being requested, but it isn't in the analytics.
It is mostly Chrome browsers requesting it; they get a 404 from me.
| __label__pos | 0.972148 |
Template:CustomCSS
Revision Information
• Revision slug: Template:CustomCSS
• Revision title: Template:CustomCSS
• Revision id: 365415
• Created:
• Creator: Sheppy
• Is reviewed? Yes
• Reviewed:
• Reviewed by: Sheppy
• Is approved? Yes
• Is current revision? No
• Comment
Revision Source
Revision Content
@charset "utf-8"; /* This is a special template used as a custom CSS for MDN. *//* When editing this stylesheet, please be careful of different writing direction pages like: https://developer.mozilla.org/he/docs/HTML If you add a custom CSS class, please try one of the following. 1, Report to the (evil) leader. 2, Write a description about the class that you have added to -- https://developer.mozilla.org/en-US/docs/Project:Custom_CSS_Classes -- */ /* Iframe for live sample centered (Test) */ .centered iframe { display:block;margin:0px auto;border:none; } /* Simulate two columns for landing page */ .landing { display: table-row } .section { display: table-cell; width:50% } /* For ending "text alongside images" blocks; add this class to the first block that shouldn't be next to the image */ .endImageWrapping { clear: both; } /* Fancy Table of content for main pages of tutorial*/ .fancyTOC { counter-reset: fancyTOC; -moz-columns: 18em; columns: 18em; margin-bottom: 1em } .fancyTOC .button { display: block; margin-right: 0; margin-bottom: .25em; background-color: #A24C4C; color: #fff; font-size: 1.5em; line-height: 1.5 } .fancyTOC a::before { counter-increment: fancyTOC; content: counter(fancyTOC) ". 
" } .fancyTOC a::after { content: ' »' } .fancyTOC .button:hover, .fancyTOC .button:focus, .fancyTOC .button:active { background-color: #C26C6C } /* In index, dim obsolete, deprecated or non-standard element instead of striking through them */ s.deprecatedElement, s.obsoleteElement, s.nonStdElement { text-decoration: none; opacity: .3 } /* Add widgeted index, here adding an HTML5 badge as the bullet of the li element if class="html5" */ div.index.widgeted { -webkit-column-width: 10em; -moz-column-width: 10em; columns: 10em } div.index.widgeted li { padding-left: 18px } div.index.widgeted li.html5 { background-image: url("/files/3855/HTML5_Badge_16.png"); background-repeat: no-repeat; background-position: left 4px } [dir="rtl"] div.index.widgeted li { padding-right: 18px } [dir="rtl"] div.index.widgeted li.html5 { background-image: url("/files/3855/HTML5_Badge_16.png"); background-repeat: no-repeat; background-position: right 4px } [dir="rtl"] div.index.widgeted span { padding-right:24px } /* Quicknav template styles : https://developer.mozilla.org/en-US/docs/Template:quicknav -------- */ #wikiArticle { position: relative; } .quicknav { position: fixed; top: 237px; left: 33px; background: #FFF; z-index: 1 } .quicknav dl { max-width: 0; max-height: 300px; overflow: auto; margin: 0; padding: 10px 0; border: 1px solid transparent; border-left: none -webkit-transition: border .5s ease .1s, max-width .5s ease .1s, padding .5s linear .1s; -moz-transition: border .5s ease .1s, max-width .5s ease .1s, padding .5s linear .1s; -ms-transition: border .5s ease .1s, max-width .5s ease .1s, padding .5s linear .1s; -o-transition: border .5s ease .1s, max-width .5s ease .1s, padding .5s linear .1s; transition: border .5s ease .1s, max-width .5s ease .1s, padding .5s linear .1s } .quicknav dt, .quicknav dd { margin: 0; padding: 0; white-space: nowrap } .quicknav dt { font-weight: 700 } .quicknav dd + dt { margin-top: .5em } .quicknav:hover dl { max-width: 300px; padding: 10px 20px 
10px 10px; border-color: #ECECE7 } .quicknav .showme { display: block; position: absolute; top: 10px; left: -31px; width: 30px; font: 2em sans-serif; color: #CCC; text-align: center; background: #FFF; border-radius: 5px 0 0 5px; border: 1px solid #ECECE7; border-right: none } .quicknav:hover .showme { color: #333; } .cleared { clear: both } /* This style has problem ! Style attribute of BR element is not work in Kuma */ .clearLeft { clear: left } /* ltr page only ? */ span.comment { display:none; } #wikiArticle .breadcrumbs { display: block; margin-bottom: 1em } /* The HTML5 landing page has some specificity */ table.html5ArticleToc { border-width: 5px } .html5ArticleToc thead th { padding: .5em 1em } .html5ArticleToc tbody td { vertical-align: middle } .html5ArticleToc ul { margin: 0; padding: 0 } .html5ArticleToc ul li { display: inline; margin: 0 .25em } /* The syntax box: the first one is now used anywhere (DOM, JS, CSS, ...) The twoparts is used for CSS properties */ pre.syntaxbox { border: 2px solid #ccc; margin-bottom: 1em; background-color: #ffe; border-radius: 10px; } pre.twopartsyntaxbox { border: 2px solid #bbb; margin-bottom: 0px; background-color: #ffe; border-radius: 10px 10px 0px 0px; } pre.twopartsyntaxbox + pre { border: 2px solid #bbb; border-radius: 0 0 10px 10px; border-top: none; margin-top: 0 } table.withoutBorder, table.withoutBorder td, table.withoutBorder tr, table.withoutBorder th { border: none } td.horizontalLine { border-left: none } td.column { border-bottom: none } td.bottomPart { border-top: none } td.verticalText { width: 3em; -webkit-transform: rotate(-90deg); -moz-transform: rotate(-90deg); -o-transform: rotate(-90deg); transform: rotate(-90deg) } table.blockTable { border-collapse: collapse } table.blockTable, table.blockTable td { margin: 1px; padding: 1px } table.blockTable .verticalColumn { border-left: none; border-right: none } /* The index page for HTML / CSS */ div.index { -webkit-columns: 14em; -moz-column-width: 14em; 
columns: 14em } div.index > span { font-family: Georgia, Times, 'Times New Roman', serif; font-size: 1.6em } div.index ul { margin-left: 0; padding-left: 0; list-style-type: none } /* ul.cssprop */ .cssprop { display: table; padding: 11px 22px; background-color: #eef } [dir="ltr"] .cssprop { clear: left; border-left: .15em solid } [dir="rtl"] .cssprop { clear: right; border-right: .15em solid } .cssprop li { display: table-row; padding: 3px; margin: 0; text-align: left } .cssprop li dfn { display: table-cell; padding: 0 5px; border-bottom: none; white-space: pre; cursor: inherit } .cssprop li dfn:after { content: ":" } .cssprop li li { display: list-item; list-style-type: disc; line-height: 1 } /* ul.htmlelt */ .htmlelt { display: table; padding: 11px 22px; background-color: #fe9 } [dir="ltr"] .htmlelt { clear: left; border-left: .15em solid } [dir="rtl"] .htmlelt { clear: right; border-right: .15em solid } .htmlelt li { display: table-row; padding: 3px; margin: 0; text-align: left } .htmlelt li dfn { display: table-cell; padding: 0 5px; border-bottom: none; white-space: pre; cursor: inherit } .htmlelt li dfn:after { content: ":" } .htmlelt li li { display: list-item; list-style-type: disc; line-height: 1 } /* https://developer.mozilla.org/en-US/docs/Template:HTML:Element_Navigation */ table.HTMLElmNav { margin: 1em auto; border-width: 5px } table.HTMLElmNav th, table.HTMLElmNav td { text-align: center } .method { margin-left: 10px; margin-bottom: 2em; margin-top: 1em } .method > .name { display: block; font-size: 13pt; margin-bottom: .2em } .method > .name > .param: after { content: ","; padding-right: .5em } .method > .name > .param: last-of-type: after { content: "" } .method > .name > .param > .name: after { content: " as "; font-weight: normal } .method > .name > .param: not(.required) { font-style: italic } .method > .name > .param: not(.required): before { content: "[" } .method > .name > .param: not(.required): after { content: "]" } .method > .description 
{ display: block; font-size: 10pt; color: #444; font-style: italic; margin-bottom: 7px } .method > .name > .returns: before { content: " returns "; font-weight: normal; font-style: italic } .method > .name > .returns { font-weight: 700 } .method > .params { display: block; color: #555 } .method > .params > .param { display: block; margin-bottom: 5px } .method > .params > .param > .name { font-weight: 700; margin-right: .5em; min-width: 80px; display: inline-block } .method > .params > .param > .description { display: inline-block; width: 300px; vertical-align: top; margin-right: 30px } .method > .params > .param > .type { display: inline-block; width: 150px; vertical-align: top; font-weight: 700 } .method > .params > .param > .type: before { content: "Type "; color: #888; font-weight: normal } .method > .params > .param > .default { display: inline-block; width: 150px; vertical-align: top; font-weight: 700 } .method > .params > .param > .default: before { content: "Default "; color: #888; font-weight: normal } .geckoVersionNote { background-color: #E0E0FF; background-image: -moz-radial-gradient(0px 0 45deg, circle farthest-corner, #E0E0FF 0%, #F8F8FF 80%); border-left: 5px solid #008 } .geckoVersionNote p { padding-left: 4px; border: 0 } .geckoVersionHeading { background-color: #008; background-image: -moz-linear-gradient(left, #008 50%, #e0e0ff 80%); color: #fff; font: 16px/1.7 Verdana, Tahoma, sans-serif; margin-top: 0; margin-left: 0; margin-bottom: 4px; height: 42px } .geckoVersionHeading a:link { color: #ddd } .geckoVersionHeading a:visited { color: #eee } .geckoVersionHeading a:hover, .geckoVersionHeading a:focus { color: #fdd } .sidebar-box { margin-left: 16px; margin-bottom: 2.5em; border-right: 1px solid #ecf1f3; padding: 12px; background: #f1f6f8 url("img/toc-bg.png") 0 0 no-repeat; font: 12px Verdana, Tahoma, sans-serif } .sidebar-box h2, .sidebar-box h3, .sidebar-box h5 { margin-bottom: .5em; font-family: inherit; font-weight: 700 } .sidebar-box h2 { 
Merging random data from two excels to create a new excel
#1
Hi,
I have to add the data of a particular column from one Excel file under a specific column of another Excel file, creating a new Excel file altogether.
For example, "Employee ID" from Excel 1 goes under "E-ID" of Excel 2,
"Task Assigned" in Excel 1 goes under "Activity" in Excel 2, etc.
The sequence of columns to be merged in Excel 1 and 2 is not the same; for instance, the 1st column of Excel 1 is the 3rd column in Excel 2, etc.
Can someone please guide me on how I should go about creating this new Excel file? It would really be a great help! Thank you in advance!
#3
@Shatakshi_Mishra
Suppose you want to merge all the columns from Excel sheet 1 into Excel sheet 2. Then
rearrange the columns of sheet 1 into the order in which they should be merged into sheet 2.
For example:
Excel sheet 1 has columns abc, bcd, cde; call it dt1.
Excel sheet 2 has columns hfg, ijh, mno; call it dt2.
Say you want to merge bcd into hfg, cde into ijh, and abc into mno.
Then create an array of column names in that order: arrA = {"bcd", "cde", "abc"}
Then use dt3 = dt1.Select().CopyToDataTable().DefaultView.ToTable(False, arrA)
Now use the Merge Data Table activity,
giving dt3 as the source and dt2 as the destination.
Regards,
Mahesh
#4
Thank you for your help!
But the 1st Excel file has 8 columns and the 2nd has 24, so Column1 from the 1st file is to be merged into Column3 of the 2nd, Column2 of the 1st is to be merged with Column2 of the 2nd, and so on… what I mean to say is that the order of columns to be merged is not uniform… so how do I go about it?
#5
@Shatakshi_Mishra
Run a For Each Row over Excel sheet 1.
Use the Add Data Row activity to add a new row to Excel sheet 2,
and fill in the values of the respective columns.
For example:
Excel sheet 1 has 3 columns and Excel sheet 2 has 4 columns.
You want to add the 1st column of the 1st sheet into the 3rd column of the 2nd sheet.
Then in Add Data Row, set the ArrayRow to:
{"", "", row("Column1").ToString, ""}
Like this
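Outside UiPath, the same column-mapping idea can be sketched in plain Python; all sheet and column names below are illustrative, not from the original thread:

```python
# Map rows from sheet 1 into sheet 2's column layout, even though the
# column order differs between the two files.
sheet1 = [
    {"Employee ID": "E01", "Task Assigned": "Audit"},
    {"Employee ID": "E02", "Task Assigned": "Review"},
]

# column in sheet 1 -> matching column in sheet 2
column_map = {"Employee ID": "E-ID", "Task Assigned": "Activity"}

sheet2_columns = ["Date", "E-ID", "Activity"]  # sheet 2 has extra columns

def to_sheet2_row(row1):
    row2 = {col: "" for col in sheet2_columns}  # start every target column empty
    for src, dst in column_map.items():
        row2[dst] = row1[src]                   # copy mapped values across
    return row2

sheet2_rows = [to_sheet2_row(r) for r in sheet1]
print(sheet2_rows[0])  # {'Date': '', 'E-ID': 'E01', 'Activity': 'Audit'}
```

The Add Data Row array in the reply above plays the role of `to_sheet2_row` here: unmapped target columns stay empty strings.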
Regards,
Mahesh
#6
Thank you so much!
#7
Hey! Sorry to bother you again; I am actually quite new to UiPath, so can you please help?
Is there any way I can access the individual cell values under a particular column?
Because I have to merge the 2 Excel files using the employee IDs from the two sheets. So is there any way I can access the value stored, one cell at a time, under Employee ID and check for equality?
#8
@Shatakshi_Mishra
You can access it if you know the column name and the row index.
Regards,
Mahesh
#9
can you please elaborate a bit?
#10
@Shatakshi_Mishra
Say you have a DataTable dt
with 6 columns and 20 rows.
You can access the value in the 6th row and 5th column like this:
a = dt.Rows(5)(4).ToString or dt.Rows(5)("Column Name").ToString
Regards,
Mahesh
#11
Okay Thank you!!
#12
Hello Mahesh,
I followed your points and built a workflow, but I'm getting an error on:
dt3 = dt1.Select(“Employee Name”).CopyToDataTable().DefaultView.ToTable(False,arr A)
Error is: Assign : Syntax error: Missing operand after ‘Name’ operator.
Please help me solve this.
Regards,
Ganesh | __label__pos | 0.746345 |
@aliang
aliang/user_form.html
Last active Dec 29, 2015
<form>
Name: <input type="text" name="name"/>
Age: <input type="text" name="age"/>
<input type="submit" value="Submit"/>
</form>
var UserForm = Backbone.View.extend({
events: {'submit': 'save'},
initialize: function() {
_.bindAll(this, 'save');
},
save: function() {
// http://api.jquery.com/serializeArray/
var arr = this.$el.serializeArray();
// This accumulates the name/value hashes into a single hash which is much easier to submit to forms
var data = _(arr).reduce(function(acc, field) {
acc[field.name] = field.value;
return acc;
}, {});
this.model.save(data);
return false;
}
});
var userForm = new UserForm({el: this.$('form'), model: new User()});
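The reduce step in `save()` does not depend on jQuery or Underscore; here is the same accumulation on its own in plain JavaScript, with made-up field data:

```javascript
// serializeArray() yields [{name, value}, ...]; reduce folds the pairs
// into a single object that is easy to pass to model.save().
var arr = [
  { name: "name", value: "Ada" },
  { name: "age", value: "36" }
];

var data = arr.reduce(function (acc, field) {
  acc[field.name] = field.value;
  return acc;
}, {});

console.log(data); // { name: 'Ada', age: '36' }
```

The gist routes this through Underscore's `_(arr).reduce`, but the native `Array.prototype.reduce` shown here behaves identically for this case.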
August 3, 1999
What Final Does in Java
When you declare a class as final, it can no longer be subclassed. Java has used this for security purposes with classes like String and Integer. It also allows the compiler to make some optimizations. When you declare a method of a class as final, subclasses of the class cannot override that method.
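A minimal sketch of both uses; the class names here are illustrative, not from the article:

```java
// 'final' on a class forbids subclassing; on a method it forbids overriding.
final class Immutable {
    int value() { return 42; }
}
// class Child extends Immutable {}   // would not compile: cannot subclass a final class

class Base {
    final String greet() { return "hi"; }  // subclasses of Base cannot override greet()
}

public class FinalDemo {
    public static void main(String[] args) {
        System.out.println(new Immutable().value()); // prints 42
        System.out.println(new Base().greet());      // prints hi
    }
}
```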
Understanding Interfaces
An interface describes the public methods that a class should implement, along with the calling conventions of those methods. An interface says nothing about the implementation of its methods. In essence, an interface is a contract of guaranteed operations, or behaviors. A class that implements an interface must provide code to implement each of the interface's methods.
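For illustration, a small interface and one implementation (the names are made up):

```java
// The interface declares behavior only; the class supplies the code.
interface Greeter {
    String greet(String name);   // contract: no body here
}

class EnglishGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        Greeter g = new EnglishGreeter();           // program against the contract
        System.out.println(g.greet("Ada"));         // prints Hello, Ada
    }
}
```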
What ‘abstract’ does
When you declare a class as abstract, it cannot be instantiated. Only subclasses of an abstract class can be instantiated, provided they are not abstract themselves. When you declare a method as abstract, that method is not implemented in the current class, i.e., it has no body.
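A short sketch (illustrative names): the abstract class leaves one method without a body, and only the concrete subclass can be instantiated.

```java
// Shape cannot be instantiated; Square fills in the abstract method.
abstract class Shape {
    abstract double area();                       // no body in the abstract class
    String describe() { return "area=" + area(); }
}

class Square extends Shape {
    double side;
    Square(double side) { this.side = side; }
    double area() { return side * side; }
}

public class AbstractDemo {
    public static void main(String[] args) {
        // new Shape();                            // would not compile
        System.out.println(new Square(3).describe()); // prints area=9.0
    }
}
```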
What ‘synchronized’ does to a method
When you synchronize a method of a class, as in: public synchronized int getAge(){ …} that method will acquire a monitor lock before execution. If the synchronized method is static, e.g. a class method, the lock is obtained on the class. Otherwise, the lock is obtained on the instance object.
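A minimal sketch (not from the original article): two threads increment a shared counter through synchronized methods, so no updates are lost.

```java
// Each call acquires the monitor lock on the Counter instance.
class Counter {
    private int count = 0;
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }
}

public class SyncDemo {
    public static void main(String[] args) throws InterruptedException {
        final Counter c = new Counter();
        Runnable task = new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) c.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // prints 20000 (without synchronized, updates could be lost)
    }
}
```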
What ‘static’ Does
When you declare a field variable of a class as static, that field variable is instantiated only once, no matter how many instances of the class are created. In other words, a static field variable is a class variable, not an instance variable. If the value of a static field variable is changed, the change is visible to all instances of the class.
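A tiny sketch showing the single shared copy (class name is illustrative):

```java
// 'instances' belongs to the class, not to any one object.
class Bean {
    static int instances = 0;  // one copy shared by all Bean objects
    Bean() { instances++; }
}

public class StaticDemo {
    public static void main(String[] args) {
        new Bean(); new Bean(); new Bean();
        System.out.println(Bean.instances); // prints 3
    }
}
```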
Persistence and Serialization
Question: What is the best source, print or online, for learning how to implement persistence in C++? I want to save my objects to disk and restore them, and I want to learn how to do this from scratch, not by purchasing someone’s library. Answer: I don’t know which compiler
SQL Server 6.5: Sometimes There, Sometimes Not
Question: I am using SQL Server 6.5 for my database needs and Visual C++ as a front-end. The problem is that sometimes when I switch on the server (where the SQL Server is also running) and try to access the database, I cannot see any database items in the drop-down
Using ADO-Connected Recordsets with MTS
Question: Can I use ADO-connected recordsets with MTS? From the information I can find it seems that MTS does not handle them. Answer: You can, but the question is, do you really want to? Using a connected (rather than disconnected) recordset means that you must maintain a stateful connection to
Blinking Borders in Spy++
Question: How does Spy++ make the borders of a window blink when you select “Highlight” from the right-click menu? I have a MouseProc in the hook chain, so I can get the handle of the window that the mouse is over; I just need to know how to make the
.after()
.after( content [, content ] ) Returns: jQuery
Description: Insert content, specified by the parameter, after each element in the set of matched elements, as a sibling node.
• Added in version 1.0: .after( content [, content ] )
• content
Type: htmlString or Element or Array or jQuery
An HTML string, DOM element, text node, array of elements and text nodes, or jQuery object to insert after each element in the set of matched elements (i.e., inserted externally).
• content
Type: htmlString or Element or Array or jQuery
One or more additional DOM elements, text nodes, arrays of elements and text nodes, HTML strings, or jQuery objects to insert after each element in the set of matched elements (i.e., inserted externally).
• Added in version 1.4: .after( function )
• function
Type: Function( Integer index ) => htmlString or Element or jQuery
A function that returns an HTML string, DOM element(s), text node(s), or jQuery object to insert after each element in the set of matched elements (i.e., inserted externally). Receives the index position of the element in the set as an argument (the index parameter). Within the function, this refers to the current element in the set.
• Added in version 1.10: .after( function-html )
• function
Type: Function( Integer index, String html ) => htmlString or Element or jQuery
A function that returns an HTML string, DOM element(s), or jQuery object to insert after each element in the set of matched elements (i.e., inserted externally). Receives the index position of the element in the set (the index parameter) and the element's old HTML value (the html parameter) as arguments. Within the function, this refers to the current element in the set.
.after() and .insertAfter() perform the same task. The main difference is syntactic: the placement of the content and the target. With .after(), the content to be inserted comes from the method's argument: $(target).after(contentToBeInserted) (that is, the selector expression precedes the method, and the argument is the content to insert). With .insertAfter(), it is the reverse: the content precedes the method and is inserted after the target, which is passed as the argument to .insertAfter(): $(contentToBeInserted).insertAfter(target)
Consider the following HTML:
<div class="container">
<h2>Greetings</h2>
<div class="inner">Hello</div>
<div class="inner">Goodbye</div>
</div>
We can create content and insert it after several elements at once:
$('.inner').after('<p>Test</p>');
Each inner <div> element gets the new content:
<div class="container">
<h2>Greetings</h2>
<div class="inner">Hello</div>
<p>Test</p>
<div class="inner">Goodbye</div>
<p>Test</p>
</div>
We can also select an element on the page and insert it after another element:
$('.container').after($('h2'));
If an element selected this way is inserted into a single location elsewhere in the DOM, it will be moved after the target (rather than cloned):
<div class="container">
<div class="inner">Hello</div>
<div class="inner">Goodbye</div>
</div>
<h2>Greetings</h2>
Important: If there is more than one target element, cloned copies of the inserted element will be created for each target except the last one.
If there is more than one target element, the content will be cloned and then inserted after each target.
Inserting Disconnected DOM Nodes
As of jQuery 1.4, .before() and .after() also work on disconnected DOM elements. For example, given the following code:
$('<div/>').after('<p></p>');
The result is a jQuery set containing a div and a paragraph. We can therefore manipulate that set further, even before it has been inserted into the document.
$('<div/>').after('<p></p>').addClass('foo')
.filter('p').attr('id', 'bar').html('hello')
.end()
.appendTo('body');
The result is the following markup, inserted just before the closing </body> tag:
<div class="foo"></div>
<p class="foo" id="bar">hello</p>
Passing a Function
As of jQuery 1.4, .after() lets us pass in a function that returns the elements to be inserted.
$('p').after(function() {
return '<div>' + this.className + '</div>';
});
The code above inserts a <div> after each paragraph; each <div> contains that paragraph's class name.
Additional Arguments
Similar to other content-adding methods such as .prepend() and .before(), .after() also supports passing in multiple arguments. Supported inputs include DOM elements, jQuery objects, HTML strings, and arrays of DOM elements.
For example, the following inserts two new <div>s and one existing <div> after the first paragraph:
var $newdiv1 = $("<div id='object1'></div>"),
newdiv2 = document.createElement('div'),
existingdiv1 = document.getElementById('foo');
$('p').first().after($newdiv1, [newdiv2, existingdiv1]);
.after() can accept any number of additional arguments, so in the example above the three separate <div>s could also have been passed to the method individually, as in $('p').first().after($newdiv1, newdiv2, existingdiv1). The type and number of arguments will largely depend on how you selected the elements.
Additional Notes:
• Prior to jQuery 1.9, if the first node in the set was not in the document, .after() would attempt to add or change the nodes in the current jQuery set, and in that case return a new jQuery set rather than the original one. The method might or might not have returned a new result, depending on the number or connectedness of its arguments! As of jQuery 1.9, .after(), .before(), and .replaceWith() always return the original unmodified set. Attempting to use these methods on a node without a parent has no effect; that is, neither the set nor the nodes it contains are changed.
• By design, any jQuery constructor or method that accepts an HTML string (jQuery(), .append(), .after(), etc.) can potentially execute code. This can happen through injection of script tags or through use of HTML attributes that execute code (for example, <img onload="">). Do not use these methods to insert strings obtained from untrusted sources such as URL query parameters, cookies, or form inputs. Doing so can introduce cross-site scripting (XSS) vulnerabilities. Remove or escape any user input before adding the content to the document.
Examples:
Example: Insert some HTML after all paragraphs.
<!DOCTYPE html>
<html>
<head>
<style>p { background:yellow; }</style>
<script src="http://code.jquery.com/jquery-latest.js"></script>
</head>
<body>
<p>I would like to say: </p>
<script>$("p").after("<b>Hello</b>");</script>
</body>
</html>
Demo:
Example: Insert a DOM element after all paragraphs.
<!DOCTYPE html>
<html>
<head>
<style>p { background:yellow; }</style>
<script src="http://code.jquery.com/jquery-latest.js"></script>
</head>
<body>
<p>I would like to say: </p>
<script>$("p").after( document.createTextNode("Hello") );</script>
</body>
</html>
Demo:
Example: Insert a jQuery object after all paragraphs.
<!DOCTYPE html>
<html>
<head>
<style>p { background:yellow; }</style>
<script src="http://code.jquery.com/jquery-latest.js"></script>
</head>
<body>
<b>Hello</b><p>I would like to say: </p>
<script>$("p").after( $("b") );</script>
</body>
</html>
Demo: | __label__pos | 0.857312 |
Differential equations on mathematica
In summary, Mathematica is a computational software program used for mathematical, scientific, and engineering calculations. It has a built-in function for solving differential equations, making it a popular tool for scientists and mathematicians. To input a differential equation, you can use the DSolve function which returns the general solution. Mathematica is capable of solving a wide range of differential equations, including ordinary, partial, and boundary value problems. You can plot the solution using the Plot function and check for accuracy using the NDSolve function.
• #1
Rafique Mir
Is there anyone who can solve these differential equations in Mathematica or some other software? The equations are attached as a file.
Attachments
• Differential equations.doc
54 KB · Views: 202
Last edited by a moderator:
• #2
Read these rules first: https://www.physicsforums.com/showthread.php?t=28
Show your work and explain where you are stuck and need help; we don't just do homework for you.
• #3
There are many individuals who are proficient in using Mathematica or other software to solve differential equations. These types of equations are commonly used in various fields such as physics, engineering, and economics. Solving them can provide valuable insights and predictions for real-world problems.
In order to solve the attached differential equations in Mathematica, one can use the built-in function DSolve[eqn, y, x] where "eqn" represents the differential equation, "y" is the dependent variable, and "x" is the independent variable. This function will return the general solution to the differential equation.
Alternatively, one can also use the NDSolve[eqns, y, {x, x0, x1}] function to numerically solve a system of differential equations. This is useful for cases where an analytical solution is not possible.
It is important to note that solving differential equations can be a complex task and may require advanced knowledge in mathematics and programming. Therefore, it is recommended to seek the assistance of a professional or a knowledgeable individual if you are not familiar with these concepts.
In conclusion, Mathematica and other software are powerful tools for solving differential equations and can greatly aid in understanding and solving real-world problems. I hope this information is helpful in your pursuit of solving these equations.
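As a concrete illustration of the two functions mentioned above, here is a small sketch; the equation is a simple stand-in, not the one from the attached file:

```mathematica
(* Symbolic solution with DSolve, including initial conditions *)
sol = DSolve[{y''[x] + y[x] == 0, y[0] == 1, y'[0] == 0}, y[x], x]
(* -> {{y[x] -> Cos[x]}} *)

(* Numerical solution with NDSolve over 0 <= x <= 10, then plotted *)
nsol = NDSolve[{y''[x] + y[x] == 0, y[0] == 1, y'[0] == 0}, y, {x, 0, 10}];
Plot[Evaluate[y[x] /. nsol], {x, 0, 10}]
```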
1. What is Mathematica and how does it relate to differential equations?
Mathematica is a powerful computational software program used for mathematical, scientific, and engineering calculations. It has a built-in function for solving differential equations, making it a popular tool for scientists and mathematicians.
2. How do I input a differential equation into Mathematica?
To input a differential equation into Mathematica, you can use the DSolve function. This function takes in the differential equation and any initial conditions, and returns the general solution.
3. Can Mathematica solve any type of differential equation?
Mathematica is capable of solving a wide range of differential equations, including ordinary differential equations, partial differential equations, and boundary value problems. However, there may be some equations that it cannot solve.
4. Can I plot the solution to a differential equation in Mathematica?
Yes, you can use the Plot function in Mathematica to plot the solution to a differential equation. This allows you to visualize the behavior of the solution over a given range of values.
5. Is there a way to check the accuracy of the solution obtained from Mathematica?
Yes, you can use the NDSolve function in Mathematica, which uses numerical methods to solve differential equations. This function also allows you to specify the desired level of accuracy for the solution.
How to Manage Multiple Discord Accounts in 2024: A Comprehensive Guide
Discord has become a vital tool for communication, whether you're gaming with friends, working with a team, or managing online communities. But when it comes to handling multiple accounts, things can get a bit tricky.
Whether you're separating work and play or managing several communities, knowing how to effectively run multiple Discord accounts is essential.
Let’s explore the best ways to juggle multiple Discord accounts in 2024, including some handy tools that make the process smooth and hassle-free.
Why Multiple Discord Accounts Might Be Useful
There are plenty of reasons you might want to use more than one Discord account. Maybe you want to keep your work life and personal life separate, or perhaps you’re managing multiple communities that require distinct identities. Having separate accounts can make your online life more organized and manageable.
Here’s how you can make it work for you:
• Work-Life Balance: Keep your professional and personal Discord interactions distinct without any mix-ups.
• Efficient Management: If you’re an admin or moderator across multiple servers, separate accounts can make your job easier.
• Privacy Matters: Sometimes, it’s just nice to keep certain parts of your online presence private from others.
Now that we’ve covered why multiple accounts might be useful, let’s look at how you can effectively manage them.
Discord’s Built-In Account Switcher
Discord provides a handy Account Switcher feature that allows users to manage up to five accounts directly in the app.
It’s pretty simple to use:
• Accessing the Account Switcher: Click on your avatar in the bottom left corner of the Discord app.
• Switching Between Accounts: Choose “Switch Accounts” from the menu to toggle between your different profiles.
This feature works well for those who only need to manage a few accounts. But what if you need to handle more than five?
Let’s explore some other methods.
Managing More Than 5 Discord Accounts
1. Logging In and Out Manually
One of the simplest methods to manage multiple accounts is to log in and out manually. It works, but it’s far from convenient. You’ll need to keep track of your login details and be ready to enter them each time you switch accounts. It’s easy to miss important messages when you’re only logged into one account at a time.
2. Using Multiple Browser Windows
Another method is to use different browser windows, each logged into a different Discord account. While this might seem like a straightforward solution, it can quickly become chaotic if you’re juggling several accounts. Finding the right window can become frustrating, especially when all the windows start to look the same.
3. Creating Separate Chrome Profiles
Chrome profiles allow you to create distinct environments within your browser. Each profile can be logged into a different Discord account, which keeps them neatly separated. However, you’ll need to remember which profile is linked to which account, and it can still require a bit of manual work.
4. Using the Discord Desktop App Alongside a Browser
A simple trick that many users find helpful is using the Discord desktop app for one account and a browser for another. This allows you to manage two accounts simultaneously without too much hassle. It’s a decent solution if you don’t have too many accounts to handle.
Advanced Methods for Managing Multiple Discord Accounts
1. Discord Canary and PTB
For those needing more accounts, Discord offers alternative versions of its app: Canary (the Alpha version) and PTB (Public Test Build or Beta). You can install these versions alongside the main app, each logged into a different account. This setup lets you manage up to 15 accounts across different Discord versions.
Keep in mind that these versions are for testing, so they might not be as stable as the regular app. But if you need more flexibility, this could be a useful option.
2. Cloning Apps on Mobile
On mobile devices, app cloning can be an effective way to manage multiple Discord accounts. Cloning apps allow you to duplicate the Discord app and log into separate accounts on each clone. However, this method comes with its downsides:
• Battery Drain: Cloning apps often run in the background, which can drain your battery faster.
• Security Concerns: Some cloning apps may not be as secure, so choosing the right one is crucial.
This method can work, but it’s important to weigh the potential risks, like account bans or other issues, especially if Discord detects unusual activity.
3. Using Multiple Devices
If you have access to multiple devices, each one can be logged into a different Discord account. While this is a straightforward solution, it can quickly become expensive and cumbersome, especially if you need to manage many accounts.
The Ultimate Solution: Multi-Account Browsers
If managing multiple accounts feels like a hassle, multi-account browsers like Multilogin offer a much more streamlined approach. These browsers are designed to handle multiple accounts across various platforms, including Discord, in one clean interface.
What Makes Multilogin Stand Out:
• Unified Management: Keep all your accounts in one place, with the ability to manage 50, 500, or even more accounts from a single dashboard.
• Account Organization: Easily name your accounts, group them into folders, and add tags or notes to keep everything organized.
• Team Collaboration: Share accounts with team members without sharing passwords or dealing with endless 2FA tokens.
• Enhanced Privacy: Multilogin creates unique digital fingerprints for each account, keeping them secure and isolated.
With Multilogin, you can effortlessly switch between accounts, maintain privacy, and manage everything efficiently. It’s an ideal tool for anyone who needs to juggle multiple Discord accounts without the headaches.
Managing Multiple Discord Accounts on Mobile
Managing multiple Discord accounts on a mobile device can be tricky, but it’s possible with the right approach. Tools like Multilogin allow you to create separate browser profiles on your mobile device, each linked to a different Discord account. This method helps prevent detection issues and keeps your accounts organized.
Switching between these profiles is seamless, and you can maintain unique settings for each account. This approach is especially useful for community managers, gamers, or anyone needing to manage multiple identities on Discord.
How to Run Multiple Discord Accounts with Multilogin
Getting started with Multilogin is straightforward, and it’s designed to make managing multiple accounts as easy as possible:
• Download and Install Multilogin: Visit the official Multilogin website and download the app for your operating system. It’s available on all major platforms.
• Register Your Account: Sign up with Google or your email, and you’ll get a free 7-day trial to explore all the features.
• Create a Browser Profile: In the app, create a new browser profile and assign a proxy to keep your accounts private and secure. Each profile should have a different IP address to ensure they remain separate.
• Log into Discord: Open your new browser profile and log into your Discord account. There’s no need to re-enter credentials—Multilogin keeps everything saved and secure.
• Manage Multiple Profiles: Create as many browser profiles as you need, with each one linked to a different Discord account. You can run all these profiles simultaneously, with each session fully isolated.
• Customize Your Workspace: Organize your profiles with folders, tags, and notes to make managing multiple accounts as easy as possible. You can even share profiles with team members, ensuring everyone has access to the accounts they need without compromising security.
Frequently Asked Questions
How many accounts can I have on Discord?
There’s no hard limit on the number of accounts you can create on Discord. You can set up separate accounts for different purposes, whether for work, gaming, or community management.
Can people tell I have two Discord accounts?
No, Discord doesn’t reveal that you’re switching between multiple accounts. Each account is treated as a separate identity, so unless someone spots similar behavior patterns, your different accounts remain discreet.
Can I make two Discord accounts with the same email?
No, Discord requires each account to have a unique email address. This is to prevent spam and misuse on the platform.
Is running multiple Discord accounts legal?
Yes, it’s completely legal to have multiple Discord accounts. Just make sure you’re following Discord’s guidelines and not using your accounts for anything that could get them banned.
Final Thought
For most users, Discord’s Account Switcher will handle a few accounts just fine. But if you’re managing more than five accounts or need a more streamlined solution, the methods discussed here offer plenty of options. From simple tricks to advanced tools like Multilogin, there’s a solution that fits every need.
If you’re ready to take your multi-account management to the next level, give Multilogin a try. It’s a powerful, secure, and convenient way to manage multiple Discord accounts without the usual headaches.
SSR Demystified
Photo by Adrien Converse on Unsplash
tldr; it's not complicated at all 🤫
Reilly O'Donnell
·Nov 25, 2022·
1 min read
Don't let lingo like SSR/CSR confuse you -- there are really just two major ways to serve HTML over an HTTP server in JS: either the server sends the HTML to the client (SSR) or the client generates the HTML for itself (CSR/SPA) using the browser's Web APIs.
To demystify SSR we are going to look at the world's simplest dynamic SSR example:
// Let's create a server
import express from 'express';

const app = express();

function generateHTML() {
  const date = new Date();
  const localTime = date.toLocaleTimeString();
  const localDate = date.toLocaleDateString();
  const doc = `
    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="UTF-8">
      <meta name="viewport" content="width=device-width, initial-scale=1.0">
      <meta http-equiv="X-UA-Compatible" content="ie=edge">
      <title>Wow</title>
    </head>
    <body>
      <span>Hello World!</span>
      <span> The date is ${localDate}</span>
      <span> It's currently: ${localTime} </span>
    </body>
    </html>
  `;
  return doc;
}

// Every get request to the '/' page will send the current date and time to the client
app.get('/', (req, res) => {
  const doc = generateHTML();
  res.send(doc);
});

const port = process?.env?.PORT ?? 3000;
app.listen(port, () => {
  console.log(`🚀 Live on http://localhost:${port}`);
});
That's it.
PHP: How to allow users to change background color of their profile?
I have made a social networking site. One of the features I would like to add is letting users change the background image of their profile. I have written JavaScript for it, but once I change the background image of one user, all users' backgrounds change to that image. What should I do to avoid this?
Javascript Code for switching background
Change background option in Settings page
Answer
You should load the background image dynamically for each user.
This is just an outline of what I think you should do:
1. Create a field in the DB for every user to hold the path of their background image
2. Once a user logs in, check whether they have one set on their profile
3. Apply it dynamically using JS
I Question about the gaps between prime numbers
1. Jun 25, 2016 #1
Is there any prime number p_n such that it has the following relationship with the next prime number p_{n+1}:
[tex] p_{n+1} > p_{n}^2 [/tex]
If not, is there any proof saying a prime like this does not exist?
I have the exact same question about this relation:
[tex] p_{n+1} > 2p_{n} [/tex]
3. Jun 25, 2016 #2
fresh_42
Staff: Mentor
https://en.wikipedia.org/wiki/Prime_gap
There is also a proof for arbitrary gaps, but see the section "upper bounds".
4. Jun 25, 2016 #3
Stephen Tashi
Science Advisor
5. Jun 26, 2016 #4
Interesting. Bertrand's Postulate answers the second part of my question. :)
I see Firoozbakht's conjecture, which is similar to my first part, but it's not quite the same thing as
[tex] p_{n+1} > p_{n}^2 [/tex]
I wonder if this can be proved or disproved from other postulates...
6. Jun 26, 2016 #5
micromass
Staff Emeritus
Science Advisor
Education Advisor
2016 Award
This also follows very easily from Bertrand's postulate.
7. Jun 26, 2016 #6
Stephen Tashi
Science Advisor
Compare ##2p_n## to ##p^2_n## .
8. Jun 26, 2016 #7
Yeah it does. Wow. I'm dumb. :p
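Both bounds from the thread are easy to check empirically. The sketch below verifies that p_{n+1} < 2·p_n (Bertrand's postulate) and hence p_{n+1} < p_n² for every pair of consecutive primes below 100,000:

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [n for n, is_prime in enumerate(sieve) if is_prime]

primes = primes_up_to(100_000)
for p, q in zip(primes, primes[1:]):
    assert q < 2 * p   # Bertrand's postulate: a prime lies strictly between p and 2p
    assert q < p * p   # follows, since 2p <= p^2 for every prime p >= 2

print("checked", len(primes) - 1, "consecutive prime pairs")
# → checked 9591 consecutive prime pairs
```

Of course this is only evidence, not a proof; the proof is exactly the two-line argument above (Bertrand plus 2p ≤ p² for p ≥ 2).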
Properties
Label: 4-249-1.1-c1e2-0-0
Degree: $4$
Conductor: $249$
Sign: $1$
Analytic cond.: $0.0158764$
Root an. cond.: $0.354967$
Motivic weight: $1$
Arithmetic: yes
Rational: yes
Primitive: yes
Self-dual: yes
Analytic rank: $0$
Dirichlet series
L(s) = 1 − 2·2-s − 2·3-s + 4-s + 4·6-s − 7-s + 4·9-s + 11-s − 2·12-s + 2·14-s + 16-s − 17-s − 8·18-s − 6·19-s + 2·21-s − 2·22-s − 2·25-s − 5·27-s − 28-s + 3·29-s + 3·31-s + 2·32-s − 2·33-s + 2·34-s + 4·36-s − 5·37-s + 12·38-s + 4·41-s + ⋯
L(s) = 1 − 1.41·2-s − 1.15·3-s + 1/2·4-s + 1.63·6-s − 0.377·7-s + 4/3·9-s + 0.301·11-s − 0.577·12-s + 0.534·14-s + 1/4·16-s − 0.242·17-s − 1.88·18-s − 1.37·19-s + 0.436·21-s − 0.426·22-s − 2/5·25-s − 0.962·27-s − 0.188·28-s + 0.557·29-s + 0.538·31-s + 0.353·32-s − 0.348·33-s + 0.342·34-s + 2/3·36-s − 0.821·37-s + 1.94·38-s + 0.624·41-s + ⋯
Functional equation
\[\begin{aligned}\Lambda(s)=\mathstrut & 249 ^{s/2} \, \Gamma_{\C}(s)^{2} \, L(s)\cr =\mathstrut & \, \Lambda(2-s) \end{aligned}\]
\[\begin{aligned}\Lambda(s)=\mathstrut & 249 ^{s/2} \, \Gamma_{\C}(s+1/2)^{2} \, L(s)\cr =\mathstrut & \, \Lambda(1-s) \end{aligned}\]
Invariants
Degree: \(4\)
Conductor: \(249\) = \(3 \cdot 83\)
Sign: $1$
Analytic conductor: \(0.0158764\)
Root analytic conductor: \(0.354967\)
Motivic weight: \(1\)
Rational: yes
Arithmetic: yes
Character: Trivial
Primitive: yes
Self-dual: yes
Analytic rank: \(0\)
Selberg data: \((4,\ 249,\ (\ :1/2, 1/2),\ 1)\)
Particular Values
\(L(1)\) \(\approx\) \(0.1315495070\)
\(L(\frac12)\) \(\approx\) \(0.1315495070\)
\(L(\frac{3}{2})\) not available
\(L(1)\) not available
Euler product
\(L(s) = \displaystyle \prod_{p} F_p(p^{-s})^{-1} \)
$p$ | $\Gal(F_p)$ | $F_p(T)$
3 (bad) | $C_1 \times C_2$ | \( ( 1 - T )( 1 + p T + p T^{2} ) \)
83 (bad) | $C_1 \times C_2$ | \( ( 1 - T )( 1 + p T^{2} ) \)
2 (good) | $D_{4}$ | \( 1 + p T + 3 T^{2} + p^{2} T^{3} + p^{2} T^{4} \)
5 (good) | $C_2^2$ | \( 1 + 2 T^{2} + p^{2} T^{4} \)
7 (good) | $D_{4}$ | \( 1 + T - 2 T^{2} + p T^{3} + p^{2} T^{4} \)
11 (good) | $C_2 \times C_2$ | \( ( 1 - 5 T + p T^{2} )( 1 + 4 T + p T^{2} ) \)
13 (good) | $C_2^2$ | \( 1 - 2 T^{2} + p^{2} T^{4} \)
17 (good) | $C_2 \times C_2$ | \( ( 1 - 2 T + p T^{2} )( 1 + 3 T + p T^{2} ) \)
19 (good) | $C_2 \times C_2$ | \( ( 1 - 2 T + p T^{2} )( 1 + 8 T + p T^{2} ) \)
23 (good) | $C_2^2$ | \( 1 - 26 T^{2} + p^{2} T^{4} \)
29 (good) | $C_2^2$ | \( 1 - 3 T + 32 T^{2} - 3 p T^{3} + p^{2} T^{4} \)
31 (good) | $D_{4}$ | \( 1 - 3 T + 30 T^{2} - 3 p T^{3} + p^{2} T^{4} \)
37 (good) | $C_2 \times C_2$ | \( ( 1 - 6 T + p T^{2} )( 1 + 11 T + p T^{2} ) \)
41 (good) | $D_{4}$ | \( 1 - 4 T - 2 T^{2} - 4 p T^{3} + p^{2} T^{4} \)
43 (good) | $D_{4}$ | \( 1 - 8 T + 70 T^{2} - 8 p T^{3} + p^{2} T^{4} \)
47 (good) | $D_{4}$ | \( 1 + 4 T - 22 T^{2} + 4 p T^{3} + p^{2} T^{4} \)
53 (good) | $C_2$ | \( ( 1 + 2 T + p T^{2} )^{2} \)
59 (good) | $D_{4}$ | \( 1 + 13 T + 106 T^{2} + 13 p T^{3} + p^{2} T^{4} \)
61 (good) | $D_{4}$ | \( 1 - 3 T + 48 T^{2} - 3 p T^{3} + p^{2} T^{4} \)
67 (good) | $C_2 \times C_2$ | \( ( 1 - 12 T + p T^{2} )( 1 + 6 T + p T^{2} ) \)
71 (good) | $D_{4}$ | \( 1 - 6 T + 94 T^{2} - 6 p T^{3} + p^{2} T^{4} \)
73 (good) | $C_2 \times C_2$ | \( ( 1 - 4 T + p T^{2} )( 1 + 10 T + p T^{2} ) \)
79 (good) | $D_{4}$ | \( 1 + 6 T + 54 T^{2} + 6 p T^{3} + p^{2} T^{4} \)
89 (good) | $C_2 \times C_2$ | \( ( 1 - 10 T + p T^{2} )( 1 + 8 T + p T^{2} ) \)
97 (good) | $D_{4}$ | \( 1 - 10 T + 82 T^{2} - 10 p T^{3} + p^{2} T^{4} \)
\(L(s) = \displaystyle\prod_p \ \prod_{j=1}^{4} (1 - \alpha_{j,p}\, p^{-s})^{-1}\)
Imaginary part of the first few zeros on the critical line
−19.9583362020, −19.2532545300, −18.8911406702, −18.3038693797, −17.5495656198, −17.3478671179, −16.7780994692, −15.9022638292, −15.5120887195, −14.4574259478, −13.4301254485, −12.5870403887, −12.0374010604, −11.0180573664, −10.4121692020, −9.68468363219, −8.95320072320, −8.02996189451, −6.82879614383, −6.02680203091, −4.45419105482, 4.45419105482, 6.02680203091, 6.82879614383, 8.02996189451, 8.95320072320, 9.68468363219, 10.4121692020, 11.0180573664, 12.0374010604, 12.5870403887, 13.4301254485, 14.4574259478, 15.5120887195, 15.9022638292, 16.7780994692, 17.3478671179, 17.5495656198, 18.3038693797, 18.8911406702, 19.2532545300, 19.9583362020
Graph of the $Z$-function along the critical line | __label__pos | 0.981219 |
J Dor - 3 months ago
Java Question
Create a stub of 3rd party Java library
My task is to create stubs for a 3rd party Java library that our application will make calls to. My problem is how to define the class of the method "return type" (if that's the correct Java terminology). I don't have access to the full documentation of the 3rd party API, just a list of methods. For now, my stubs just need to return true/false or 1/0, whatever.
Here's an example of one method to illustrate. This is what I have been given:
OobResponse RequestOobRequest(
String ClientName,
String SecurityLink,
short LenofHHU,
RequestMode RequestMode)
I have no idea what OobResponse or RequestMode are supposed to be, but I should still be able to create stubs, right?
So far, this is all I have.
public class stubber {
public class OobResponse {
public int someVar;
}
public class RequestMode {
public int someVar;
}
public OobResponse RequestOobRequest(
String ClientName,
String SecurityLink,
short LenofHHU,
RequestMode RequestMode)
{
OobResponse oobr = new OobResponse();
return oobr;
}
}
Answer
The documentation you have is odd, since the variable and method names do not follow the Java convention of using camelCase. Also, what you are being asked to do would be of minimal later use. However, the way I understand your problem, you could do:
• create new package for all classes you will be stubbing. That will be relevant later
• actually stub stuff. That is, for every class in the documentation that is not built into java create the class. I assumed that what you wrote is a method declaration (made most sense to me, though it could also be a constructor or whatever), it needs to be a part of some class, I called it "Unknown" below. Replace that name with actual class name.
For your example you would need:
public class RequestMode {
}
public class OobResponse {
}
public class Unknown {
public OobResponse RequestOobRequest(
String ClientName,
String SecurityLink,
short LenofHHU,
RequestMode RequestMode){
return new OobResponse(); // or null, whatever since it is a stub
}
}
Note that when stubbing you do not create any additional variables (like the someVar you tried to add), ONLY what the API allows you to access (only classes and public methods within would be a good rule of thumb). You could also use interfaces instead of classes, which would be cleaner, but there are legitimate reasons not to (for example, when you want code with new StubbedClass() to compile).
Now, in your actual code you (or someone) will be able to use your stubs like the actual library:
public class YourBusinessClass{
public OobResponse getOobByClientName(String clientName){
return new Unknown().RequestOobRequest(clientName,...);
}
}
• When you get the actual library you can replace imports from stub package in your actual code that uses it to the actual library package.
That is the only useful way of using stubs like that I could think of, so I hope that is what you want.
toolkit/components/extensions/test/mochitest/test_ext_cookies_containers.html
author Andrea Marchesini <[email protected]>
Fri, 28 Oct 2016 10:16:06 +0200
changeset 319973 691162eba71737f765cb0dda5470a3a516e0d47d
permissions -rw-r--r--
Bug 1302697 - Containers and WebExtensions - part 2 - Cookie API, r=kmag
<!DOCTYPE HTML>
<html>
<head>
<title>WebExtension test</title>
<script type="text/javascript" src="chrome://mochikit/content/tests/SimpleTest/SimpleTest.js"></script>
<script type="text/javascript" src="chrome://mochikit/content/tests/SimpleTest/SpawnTask.js"></script>
<script type="text/javascript" src="chrome://mochikit/content/tests/SimpleTest/ExtensionTestUtils.js"></script>
<script type="text/javascript" src="chrome_head.js"></script>
<script type="text/javascript" src="head.js"></script>
<link rel="stylesheet" type="text/css" href="chrome://mochikit/content/tests/SimpleTest/test.css"/>
</head>
<body>
<script type="text/javascript">
"use strict";
add_task(function* setup() {
// make sure userContext is enabled.
return SpecialPowers.pushPrefEnv({"set": [
["privacy.userContext.enabled", true],
]});
});
add_task(function* test_cookie_containers() {
function background() {
function assertExpected(expected, cookie) {
for (let key of Object.keys(cookie)) {
browser.test.assertTrue(key in expected, `found property ${key}`);
browser.test.assertEq(expected[key], cookie[key], `property value for ${key} is correct`);
}
browser.test.assertEq(Object.keys(expected).length, Object.keys(cookie).length, "all expected properties found");
}
const TEST_URL = "http://example.org/";
const THE_FUTURE = Date.now() + 5 * 60;
let expected = {
name: "name1",
value: "value1",
domain: "example.org",
hostOnly: true,
path: "/",
secure: false,
httpOnly: false,
session: false,
expirationDate: THE_FUTURE,
storeId: "firefox-container-1",
};
browser.cookies.set({url: TEST_URL, name: "name1", value: "value1",
expirationDate: THE_FUTURE, storeId: "firefox-container-1"})
.then(cookie => {
browser.test.assertEq("firefox-container-1", cookie.storeId, "the cookie has the correct storeId");
return browser.cookies.get({url: TEST_URL, name: "name1"});
})
.then(cookie => {
browser.test.assertEq(null, cookie, "get() without storeId returns null");
return browser.cookies.get({url: TEST_URL, name: "name1", storeId: "firefox-container-1"});
})
.then(cookie => {
assertExpected(expected, cookie);
return browser.cookies.getAll({storeId: "firefox-default"});
})
.then(cookies => {
browser.test.assertEq(0, cookies.length, "getAll() with default storeId returns an empty array");
return browser.cookies.getAll({storeId: "firefox-container-1"});
})
.then(cookies => {
browser.test.assertEq(1, cookies.length, "one cookie found for matching domain");
assertExpected(expected, cookies[0]);
return browser.cookies.remove({url: TEST_URL, name: "name1", storeId: "firefox-container-1"});
})
.then(details => {
assertExpected({url: TEST_URL, name: "name1", storeId: "firefox-container-1"}, details);
return browser.cookies.get({url: TEST_URL, name: "name1", storeId: "firefox-container-1"});
})
.then(cookie => {
browser.test.assertEq(null, cookie, "removed cookie not found");
})
.then(() => {
browser.test.notifyPass("cookies");
});
}
let extension = ExtensionTestUtils.loadExtension({
background,
manifest: {
permissions: ["cookies", "*://example.org/"],
},
});
yield extension.startup();
yield extension.awaitFinish("cookies");
yield extension.unload();
});
</script>
</body>
</html>
Animation.onremove
Experimental
This is an experimental technology
Check the Browser compatibility table carefully before using this in production.
The Animation interface's onremove property (from the Web Animations API) is the event handler for the remove event. This event is sent when the animation is removed (i.e., put into an active replace state).
Syntax
var removeHandler = animation.onremove;
animation.onremove = removeHandler;
Value
A function to be called to handle the remove event, or null if no remove event handler is set.
Examples
In our simple replace indefinite animations demo, you can see the following code:
const divElem = document.querySelector('div');
document.body.addEventListener('mousemove', evt => {
let anim = divElem.animate(
{ transform: `translate(${ evt.clientX}px, ${evt.clientY}px)` },
{ duration: 500, fill: 'forwards' }
);
anim.commitStyles();
//anim.persist()
anim.onremove = function() {
console.log('Animation removed');
}
console.log(anim.replaceState);
});
Here we have a <div> element, and an event listener that fires the event handler code whenever the mouse moves. The event handler sets up an animation that animates the <div> element to the position of the mouse pointer. This could result in a huge animations list, which could create a memory leak. For this reason, modern browsers automatically remove overriding forward filling animations.
A console message is logged each time an animation is removed; the handler is invoked when the remove event fires.
Specifications
Specification
Web Animations Level 2 (Web Animations 2)
# dom-animation-onremove
Bayes' theorem
Bayes' Theorem describes how the conditional probability of each of a set of possible causes for a given observed outcome can be computed from knowledge of the probability of each cause and the conditional probability of the outcome given each cause.
An application-oriented question on the topic along with responses can be seen below. The best answer was provided by Sharad Talvalkar on 05th August 2019.
Applause for the respondents- Mohamed Asif, Manjula Pujar, Sharad Talvalkar, Ram Rajagopalan & Sreyash Sangam
Question
Q. 182 Explain Prior Probability and Posterior Probability along with application of Bayes' theorem in a business scenario.
Note for website visitors - Two questions are asked every week on this platform. One on Tuesday and the other on Friday.
6 answers to this question
Recommended Posts
• 0
• Solution
Explain prior probability & posterior Probability along with application of Bayes Theorem in a business scenario.
Prior probability: Probability is an intricate subject. Therefore, the concepts of probability are initially introduced and explained with the help of orderly examples where the outcomes are known to us by applying simple logic or common sense, e.g. tossing a fair coin, rolling an unbiased die, drawing a card from a deck. In all these cases we can make our probability statements even prior to conducting any experiment. We know that the probability of getting a Head when a coin is tossed is 0.5 (50%). Hence such classical cases are known as Prior Probabilities, where the outcome is known even before the experiment is conducted.
Classical approach defines the probability of getting either Head or Tail when a coin is tossed as
P(event) = (number of favourable outcomes) / (total number of equally likely outcomes), so P(Head) = P(Tail) = 1/2
This approach to probability is useful when we deal with Coin tosses, card games , dice game etc. Real life situations in management are not so straight forward & therefore one has to define probability in a different way.
Posterior Probability : At the beginning of the World Cup Cricket Match 2019, Indian Fans were very confident (99%) that India will win the world cup. As the matches progressed, some key players like Shikhar Dhawan , Vijay Shankar got injured & they could not participate . Hence after getting this additional information, Indian Fans revised the probability ( let’s say 80% )of winning a world cup. This revised probability after getting additional information is known as Posterior Probability.
A similar situation also occurs in a business scenario. A shopkeeper may order various colors of Jacket ( Blue, Black, Grey etc.) based on the past consumption pattern. As time progresses, he may notice that the sale of Jackets is not as per his expectation & so after getting this input the shopkeeper may change his ordering pattern of Jackets. This revised ordering pattern is an example of Posterior Probability.
Bayes Theorem: Bayes formula for conditional probability under dependence is as follows
P(A|B) = P(B|A) × P(A) / P(B)
Let us now understand the application of Bayes Theorem in a business scenario with the help of following example
Suppose there are three machines ( M1,M2 & M3), each of them producing a same component, say X. Production from M1, M2 & M3 is 40%, 49% & 11%. If there is a customer complaint from the market what is the probability that it is from M1, M2 & M3.
Using simple logic, we can say that the probability of defective coming from M1,M2 & M3 is 0.4, 0.49 & 0.11 respectively.
Now we have additional information that the defectives from M1,M2 & M3 are 0.5%, 3% & 2%. In this scenario when a complaint comes from the market what is the probability that the defective is coming from M1,M2 & M3 .
Using Bayes Theorem, we can say that
P(M1 | defective) = (0.40 × 0.005) / (0.40 × 0.005 + 0.49 × 0.03 + 0.11 × 0.02) = 0.0020 / 0.0189 = 0.1058
P(M2 | defective) = 0.0147 / 0.0189 = 0.7778
P(M3 | defective) = 0.0022 / 0.0189 = 0.1164
In the above example our initial probability of getting defectives from M1,M2 & M3 was 0.4, 0.49 & 0.11.respectively. This probability is the Prior Probability.
Later, after getting additional information that the defectives from M1,M2 & M3 are 0.5%, 3% & 2% we have revised the probability of getting defectives from M1,M2, M3 to 0.1058, 0.7778 & 0.1164. This revised probability is known as Posterior Probability.
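The arithmetic above is easy to reproduce in a few lines of Python (a rough sketch using the production shares and defect rates assumed in the example):

```python
# Prior probabilities: production share of machines M1, M2, M3
prior = [0.40, 0.49, 0.11]
# Likelihoods: defect rate of each machine
defect_rate = [0.005, 0.03, 0.02]

# Joint probability that a random part comes from machine i AND is defective
joint = [p * d for p, d in zip(prior, defect_rate)]
evidence = sum(joint)  # overall probability that a random part is defective

# Posterior: probability that a market complaint traces back to each machine
posterior = [j / evidence for j in joint]
print([round(p, 4) for p in posterior])  # → [0.1058, 0.7778, 0.1164]
```

Note how the posterior shifts weight toward M2: it produces about half the parts but has by far the highest defect rate.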
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
• 0
Covering some basics:
Posterior Probability (Conditional Probability):
We use this, when we have new considerations based on recent updated data and wanted to update the probability of the event.
Under normally distributed prior and likelihood probabilities, we can describe the posterior probability with a function; this is referred to as a Closed Form Solution.
Posterior probability is often referred to as revised probability.
P(A|B) = P(B|A) × P(A) / P(B)
Where,
A and B are events;
P(A) is Probability of A occurring;
P(B) is Probability of B occurring;
P(A|B) is Conditional probability of A (given that B occurs)
P(B|A) is Conditional probability of B (given that A occurs)
P(A) & P(B) are Prior Probabilities
Note:
Posterior probability is calculated by updating prior probability
To make it simple, Posterior Probability = Prior Probability + New Evidence
Considering an example of Gold Rate and Rupee Strength,
Suppose, gold rate increased 70% of the time and gold rate decreased 30% of the time.
i.e., P(Increase) = 0.7; P(decrease) = 0.3
In the recent past, after demonetization, given that the rupee lost strength against the dollar, the gold price increased 80% of the time; and given that the rupee gained strength, the gold price decreased 40% of the time.
i.e., P(loss | Increase) = 0.8 and P(Gain | decrease) = 0.4
Here, P(Increase) and P(decrease) are Prior Probabilities
And, P(loss | Increase) and P(Gain | decrease) are Conditional Probability
Now getting into results,
P(Increase | loss) = P(loss | Increase) × P(Increase) / [P(loss | Increase) × P(Increase) + P(loss | decrease) × P(decrease)]
= (0.8 × 0.7) / (0.8 × 0.7 + 0.6 × 0.3) = 0.56 / 0.74 ≈ 0.76, where P(loss | decrease) = 1 − P(gain | decrease) = 0.6
So, the probability that the gold price will increase, given a weak rupee, is 0.76
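The same computation can be wrapped in a small reusable function (a sketch; the bayes_posterior helper below is illustrative, not from any library):

```python
def bayes_posterior(prior_a, likelihood_e_given_a, likelihood_e_given_not_a):
    """P(A | E) for a binary hypothesis, via Bayes' theorem."""
    evidence = (prior_a * likelihood_e_given_a
                + (1 - prior_a) * likelihood_e_given_not_a)
    return prior_a * likelihood_e_given_a / evidence

# P(Increase) = 0.7, P(loss | Increase) = 0.8,
# P(loss | decrease) = 1 - P(gain | decrease) = 1 - 0.4 = 0.6
p = bayes_posterior(0.7, 0.8, 0.6)
print(round(p, 2))  # → 0.76
```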
We can apply Bayesian approach in below business scenarios widely to predict outcomes,
• Marketing and R&D
• Pricing Decision
• New Product Development
• Logistics
• Promotional Campaigns
Taking Bayesian statistics to the next level means leaping into MCMC, Markov Chain Monte Carlo methods.
These methods help in finding the posterior distribution of the metrics; the algorithms generate simulations to estimate the metric parameters.
We can write simple code to have the system calculate the probability.
• 0
Bayes Theorem
Bayes theorem is about probability of 2 or more parameters.
If parameter of distribution, can be estimated which can not be fixed one. It may be random variable.
Bayes theorem works on probability rules
Normal probabability :
B
A
If A and B are two boxes and we have to select one box
Then probability of A and B
Probability(A)è P(A) = 1/2 è A/(A+B)
Probability(B)è P(B) = 1/2 è B/(A+B)
Suppose there are 3 Boxes A Band C
Then P(A)=1/3 P(B)=1/3 P(C)=1/3
Let’s say possibilities of selecting box A is 55% , box B is 30% and box C is 15%
Then
P(A)= 55/100=0.55
P(B)= 30/100=0.3
P(C)= 15/100=0.15
Conditional probability:
P(X|A)
X = the element being selected
A = the collection the element is selected from
Example: suppose there are 5 red balls and 2 white balls in a box. The probability of selecting a red ball from the box is
P(X|A) = 5/7
Prior probability
Prior probability is an estimated possibility. In this case we set the probability without knowing the actual data. It is estimated by deductive reasoning, and can be stated through the principle of indifference.
Posterior probability
Posterior probability estimated on basis of previous data. Depending on past trend according to required frequency probability is calculated. Posterior probability is estimated considering prior probability in additional to new evidence.
For example there is Garment shop in which sold highest count of Red Shirts in last year. According to that this year also Red Shirt production are made. But This year Trend is of yellow shirts. Now posterior probability to be calculated as we know past trend and also changes i.e P(A/B)= P(A*B)/P(B)
Prior and Posterior probability can be explained by one more example
A team plays on its home ground and also outside: 60% of games are played at home and 40% outside. When it played on the home ground, 75% of matches were won and 25% lost. When the team played outside, 45% of matches were won and 55% lost.
Now it is given that the team WON.
What is the probability that the match was played on the home ground, P(H|W)?
P(H|W) = P(W|H)·P(H) / [P(W|H)·P(H) + P(W|A)·P(A)]
where
P(W|H) = probability of winning at home = 0.75
P(H) = probability of a game being played at home = 0.6
P(W|A) = probability of winning outside = 0.45
P(A) = probability of a game being played outside = 0.4
Substituting the values:
P(H|W) = (0.75 × 0.6) / ((0.75 × 0.6) + (0.45 × 0.4)) = 0.45 / 0.63 = 71.4%
So if our team won, there is a 71.4% probability the game was played on the home ground.
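A few lines of Python confirm the figure (a sketch using the probabilities assumed in this example):

```python
p_home, p_away = 0.6, 0.4            # where games are played
p_win_home, p_win_away = 0.75, 0.45  # win rate in each venue

# Total probability of a win (law of total probability)
p_win = p_home * p_win_home + p_away * p_win_away   # 0.63

# Bayes: probability the game was at home, given that the team won
p_home_given_win = p_home * p_win_home / p_win      # 0.45 / 0.63
print(f"{100 * p_home_given_win:.1f}%")  # → 71.4%
```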
• 0
Prior probability doesn't consider all the background information when predicting that an event will happen; posterior probability considers all the background information to calculate the probability of the event.
An example of Bayes in retail e-commerce: based on a user's profile and searches, predict the product he/she is looking for and make recommendations.
• 0
Bayes theorem is highly applicable in business scenarios wherever we want to find the probability of occurrence of an event when we have certain clues and guides regarding the processes impacting its outcome. Bayes theorem is closely associated with prior and posterior probability, in which all the evidence and data associated with the occurrence of an event is known in advance and is used to calculate the probability of that event occurring.
One example, associated with the manufacturing of textile machinery, where the applicability of Bayes theorem can be tested: the consumption or procurement of textile machinery depends on several factors. Let's say the most important factor among these is a tax exemption announced by the Ministry of Textiles for textile promotion. This is one of the probabilities with which the original equipment manufacturer can determine the probability of selling the textile machinery.
Thus Bayes theorem is associated with the degree of belief in a certain process achieving a certain specification. It can be accounted for in two scenarios: before and after gathering the evidence. When the probability is calculated before gathering the evidence it is called a prior probability calculation, and when it is calculated after gathering the evidence, it is called a posterior probability calculation.
• 0
The chosen best answer is that of Sharad Talvalkar for providing a clear explanation and multiple examples. For more examples, refer to Manjula's and Asif's answers, and for a simple understanding of the concept, read through Sreyash's and Ram's answers.
This topic is now closed to further replies.
Background: When proving that the group of $k$-isogenies $\mathrm{Hom}_k(A,B)$ between two abelian varieties is finitely generated, one first shows that the Tate map $$\mathbb{Z}_\ell\otimes_{\mathbb{Z}} M \to \mathrm{Hom}_{\mathbb{Z}_\ell}(T_\ell A,T_\ell B)$$ is injective. Since each Tate module is free of finite rank over $\mathbb{Z}_\ell$, it follows that the localization $M_\ell$ is $\mathbb{Z}_\ell$-finite. One then uses a little trick to deduce the $\mathbb{Z}$-finiteness of $M$ itself. (See Silverman I, for example.)
The above proof needs only a single prime $\ell$, but disregarding issues of the characteristic of the field (which are apparently surmountable) we actually have an injective Tate map at every prime. Thus...
Question: Can the $\mathbb{Z}$-finiteness of $M$ be deduced directly from the $\mathbb{Z}_\ell$-finiteness of $M_\ell$ for all primes $\ell$?
One can consider this a question about general torsion-free abelian groups $M$. A non-counterexample to keep in mind is $M=\mathbb{Z}[1/p]$, for which $M_\ell$ is $\mathbb{Z}_\ell$-finite for all $\ell\neq p$.
(A google search shows that there is actually quite a body of literature on torsion-free abelian groups, so perhaps the answer to this question is well-known, but I'm not sure where to look...)
• good point - fixed – Sam Lichtenstein Nov 6 '09 at 22:08
• You dealt with my comment completely so I deleted the comment. In some sense this is a confusing aspect of this site. Witness another question which currently looks like "Question" "answer" "comment that this answer is clearly wrong" "comment that it was right once, but then the question changed." – Kevin Buzzard Nov 7 '09 at 9:02
• latex adjusted to mathjax. – András Bátkai Jul 8 '13 at 21:59
I don't think so. Let M be the additive subgroup of the rationals consisting of rationals with squarefree denominator.
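To spell out why this works (details not stated in the answer above): for this M, every denominator has ℓ-adic valuation at most 1, so for each prime ℓ

```latex
M \otimes_{\mathbb{Z}} \mathbb{Z}_\ell \;\cong\; \tfrac{1}{\ell}\,\mathbb{Z}_\ell \;\cong\; \mathbb{Z}_\ell ,
```

which is free of rank 1. Yet M itself is not finitely generated over $\mathbb{Z}$: it contains $1/p$ for every prime $p$, while any finite generating set could involve only finitely many primes in its denominators.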
• Note that if $M$ is countable, then under the hypotheses it is a subgroup of some $\mathbb{Q}^n$. So in some sense all counterexamples are of this form. – Greg Kuperberg Nov 6 '09 at 22:14
• The hypotheses imply M is countable. For M embeds into V:=M tensor_Z Q, and V tensor_Q Q_p is finite-dimensional over Q_p, so V is finite-dimensional over Q. – Kevin Buzzard Nov 6 '09 at 22:22
• Kevin, this example also appears near the end of Bass' famous paper "Big projective modules are free". – BCnrd Feb 25 '10 at 23:18
The question is confusing. Presumably, by finite you mean finitely generated, but it's not clear what you mean by localization at l --- you seem to mean tensor with Zl. If M is torsion free and becomes finitely generated when tensored with Zl for one l, then obviously it is finitely generated (linearly independent elements will remain linearly independent). However, when you prove that Hom(A,B) is finitely generated, the first step is to show that Hom(A,B) injects into Hom(TlA,TlB). The harder step is to show that Hom(A,B) tensor Zl injects into it.
I was reading Milne's ``Abelian Varieties'' notes this week and had almost this exact same question regarding the proof that Hom(A,B) is a free $\mathbb{Z}$-module. An internet search revealed this post and I felt that I had a thought to contribute. In particular, I believe that the proofs found in {Silverman 1, Milne, Mumford} that Hom(A,B) is a free $\mathbb{Z}$-module may be omitting a small and subtle but important step.
For instance, Sam Lichtenstein originally posted above that in Silverman's Arithmetic of Elliptic Curves, ``one then uses a little trick to deduce the $\mathbb{Z}$-finiteness of $M$ itself'', where $M$ is Hom(A,B). The little trick is quoted here for those who do not have Silverman in front of them:
Begin Silverman:
Since Hom($E_1$,$E_2$) is torsion-free, it follows that $$\mbox{rank}_\mathbb{Z} \mbox{Hom}(E_1,E_2) = \mbox{rank}_{\mathbb{Z}_l} \mbox{Hom}(E_1,E_2)\otimes \mathbb{Z}_l,$$ in the sense that if one is finite, then they both are and they are equal.
End Silverman
My complaint is that the left-hand side does not make sense because we have not established much about Hom($E_1$,$E_2$). All we know is that Hom($E_1$,$E_2$) is torsion free abelian group. This does not seem sufficient to define $\mathbb{Z}$-rank. For example, what is the $\mathbb{Z}$-rank of $\mathbb{Q}$? Any two nonzero rational numbers are linearly dependent over $\mathbb{Z}$, and since $\mathbb{Q}$ is torsion-free we must conclude that $\mathbb{Q}$ has $\mathbb{Z}$-rank 1, so $\mathbb{Q} \simeq \mathbb{Z}$ (?!?!).
In Mumford, the proof that Hom(A,B) is a finitely generated free $\mathbb{Z}$-module appears to be the following progression of steps, each with its own detailed proof except for step 4:
1. Hom(A,B) is torsion-free
2. If $M$ is a finitely generated submodule of Hom(A,B), then $(M\otimes\mathbb{Q}) \cap \mbox{Hom}(A,B)$ is finitely generated.
3. $\mbox{Hom}(A,B) \otimes \mathbb{Z}_l$ is a free $\mathbb{Z}_l$-module for all $l \neq p$, where $p$ is the characteristic of the field
4. Steps 1-3 obviously now imply that Hom(A,B) is a free $\mathbb{Z}$-module
Step 4 is the step I was unable to follow at first. This is because step 2 holds the key to step 4 in a way that is somewhat subtle. For example, consider the torsion-free abelian group $N \subset \mathbb{Q}$ consisting of all rational numbers with denominators with $l$-adic valuation 0 or 1 for all primes $l$. That is, $N$ is the set of all $a/b$ where gcd$(a,b) = 1$ and the prime factorization of $b$ is $b = p_1p_2\cdots p_t$, $p_i \neq p_j$ for $i \neq j$. $N \otimes \mathbb{Z}_l$ is isomorphic to the principal fractional ideal $(1/l)\mathbb{Z}_l$. Since we only care about the $\mathbb{Z}_l$-module structure of $N \otimes \mathbb{Z}_l$, we see that $(1/l)\mathbb{Z}_l$ is a free $\mathbb{Z}_l$-module of rank 1, where the isomorphism $(1/l)\mathbb{Z}_l \rightarrow \mathbb{Z}_l$ is given by multiplication by $l$. $N$ is not finitely generated and thus would provide a counterexample if step 2 were not important because $N$ satisfies steps 1 and 3. However, it fails step 2. If $M$ is a nonzero finitely generated submodule of $N$, then $$(M \otimes \mathbb{Q}) \cap N = \mathbb{Q} \cap N = N$$ and $N$ is not finitely generated. Mumford pays lip service to the use of step 2 to prove step 4, but he does not fully explain.
What I think is missing is something like the following proposition: ``If $N \subseteq \mathbb{Q}$ is a subgroup satisfying axiom 2, then $N$ is finitely generated''. Prove this by contradiction similar to the previous paragraph. Let $M \subseteq N$ be a finitely-generated submodule and observe that $M \otimes \mathbb{Q} = \mathbb{Q}$, hence $M$ is finitely generated if and only if $N$ is finitely generated.
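For completeness, here is one way to write down the proposed proposition and its short proof; this is a sketch in the notation of the post, not a quotation from any of the references:

```latex
% Proposition. Let N \subseteq \mathbb{Q} be a nonzero subgroup such that
% (M \otimes \mathbb{Q}) \cap N is finitely generated for every finitely
% generated submodule M \subseteq N (axiom 2). Then N is finitely generated.
%
% Proof. Choose any nonzero x \in N and set M = x\mathbb{Z}, a finitely
% generated submodule of rank 1. Then M \otimes \mathbb{Q} = \mathbb{Q}, so
%   (M \otimes \mathbb{Q}) \cap N = \mathbb{Q} \cap N = N,
% and axiom 2 says this intersection is finitely generated. \square
```

In the general situation, step 3 bounds the rank of $\mathrm{Hom}(A,B)$, so one can take a finitely generated submodule $M$ of maximal rank; then $\mathrm{Hom}(A,B) \subseteq M \otimes \mathbb{Q}$ and step 2 applies in the same way.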
fft.C
Go to the documentation of this file.
1 /*---------------------------------------------------------------------------*\
2 ========= |
3 \\ / F ield | OpenFOAM: The Open Source CFD Toolbox
4 \\ / O peration |
5 \\ / A nd | www.openfoam.com
6 \\/ M anipulation |
7 -------------------------------------------------------------------------------
8 Copyright (C) 2011-2015 OpenFOAM Foundation
9 Copyright (C) 2016-2018 OpenCFD Ltd.
10 -------------------------------------------------------------------------------
11 License
12 This file is part of OpenFOAM.
13
14 OpenFOAM is free software: you can redistribute it and/or modify it
15 under the terms of the GNU General Public License as published by
16 the Free Software Foundation, either version 3 of the License, or
17 (at your option) any later version.
18
19 OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
20 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
21 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
22 for more details.
23
24 You should have received a copy of the GNU General Public License
25 along with OpenFOAM. If not, see <http://www.gnu.org/licenses/>.
26
27 \*---------------------------------------------------------------------------*/
28
29 #include "fft.H"
30 #include <fftw3.h>
31
32 // * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
33
34 void Foam::fft::fftRenumberRecurse
35 (
36 List<complex>& data,
37 List<complex>& renumData,
38 const UList<int>& nn,
39 label nnprod,
40 label ii,
41 label l1,
42 label l2
43 )
44 {
45 if (ii == nn.size())
46 {
47 // We've worked out the renumbering scheme. Now copy
48 // the components across
49
50 data[l1] = complex(renumData[l2].Re(), renumData[l2].Im());
51 }
52 else
53 {
54 // Do another level of folding. First work out the
55 // multiplicative value of the index
56
57 nnprod /= nn[ii];
58 label i_1(0);
59
60 for (label i=0; i<nn[ii]; i++)
61 {
62 // Now evaluate the indices (both from array 1 and to
63 // array 2). These get multiplied by nnprod to (cumulatively)
64 // find the real position in the list corresponding to
65 // this set of indices
66
67 if (i < nn[ii]/2)
68 {
69 i_1 = i + nn[ii]/2;
70 }
71 else
72 {
73 i_1 = i - nn[ii]/2;
74 }
75
76
77 // Go to the next level of recursion
78
79 fftRenumberRecurse
80 (
81 data,
82 renumData,
83 nn,
84 nnprod,
85 ii+1,
86 l1+i*nnprod,
87 l2+i_1*nnprod
88 );
89 }
90 }
91 }
92
93
94 void Foam::fft::fftRenumber(List<complex>& data, const UList<int>& nn)
95 {
96 List<complex> renumData(data);
97
98 label nnprod(1);
99 forAll(nn, i)
100 {
101 nnprod *= nn[i];
102 }
103
104 label ii(0), l1(0), l2(0);
105
106 fftRenumberRecurse
107 (
108 data,
109 renumData,
110 nn,
111 nnprod,
112 ii,
113 l1,
114 l2
115 );
116 }
117
118
119 Foam::tmp<Foam::complexField>
120 Foam::fft::realTransform1D(const scalarField& field)
121 {
122 const label n = field.size();
123 const label nBy2 = n/2;
124
125 // Copy of input field for use by fftw
126 // - require non-const access to input and output
127 // - use double to avoid additional libfftwf for single-precision
128
129 List<double> in(n);
130 List<double> out(n);
131
132 for (label i=0; i < n; ++i)
133 {
134 in[i] = field[i];
135 }
136
137 // Using real to half-complex fftw 'kind'
138 fftw_plan plan = fftw_plan_r2r_1d
139 (
140 n,
141 in.data(),
142 out.data(),
143 FFTW_R2HC,
144 FFTW_ESTIMATE
145 );
146
147 fftw_execute(plan);
148
149 // field[0] = DC component
150 auto tresult = tmp<complexField>::New(nBy2 + 1);
151 auto& result = tresult.ref();
152
153 result[0].Re() = out[0];
154 result[nBy2].Re() = out[nBy2];
155 for (label i = 1; i < nBy2; ++i)
156 {
157 result[i].Re() = out[i];
158 result[i].Im() = out[n - i];
159 }
160
161 fftw_destroy_plan(plan);
162
163 return tresult;
164 }
165
166
167 Foam::tmp<Foam::complexField> Foam::fft::realTransform1D
168 (
169 const tmp<scalarField>& tfield
170 )
171 {
172 tmp<complexField> tresult = realTransform1D(tfield());
173 tfield.clear();
174 return tresult;
175 }
176
177
178 void Foam::fft::transform
179 (
180 complexField& field,
181 const UList<int>& nn,
182 transformDirection dir
183 )
184 {
185 // Copy field into fftw containers
186 const label N = field.size();
187
188 fftw_complex* inPtr =
189 static_cast<fftw_complex*>(fftw_malloc(sizeof(fftw_complex)*N));
190 fftw_complex* outPtr =
191 static_cast<fftw_complex*>(fftw_malloc(sizeof(fftw_complex)*N));
192
193 // If reverse transform : renumber before transform
194 if (dir == REVERSE_TRANSFORM)
195 {
196 fftRenumber(field, nn);
197 }
198
199 forAll(field, i)
200 {
201 inPtr[i][0] = field[i].Re();
202 inPtr[i][1] = field[i].Im();
203 }
204
205 // Create the plan
206 // FFTW_FORWARD = -1
207 // FFTW_BACKWARD = 1
208
209 // 1-D plan
210 // fftw_plan plan =
211 // fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
212
213 // Generic 1..3-D plan
214 const label rank = nn.size();
215 fftw_plan plan =
216 fftw_plan_dft(rank, nn.begin(), inPtr, outPtr, dir, FFTW_ESTIMATE);
217
218 // Compute the FFT
219 fftw_execute(plan);
220
221 forAll(field, i)
222 {
223 field[i].Re() = outPtr[i][0];
224 field[i].Im() = outPtr[i][1];
225 }
226
227 fftw_destroy_plan(plan);
228
229 fftw_free(inPtr);
230 fftw_free(outPtr);
231
232 // If forward transform : renumber after transform
233 if (dir == FORWARD_TRANSFORM)
234 {
235 fftRenumber(field, nn);
236 }
237 }
238
239
240 // * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
241
242 Foam::tmp<Foam::complexField> Foam::fft::forwardTransform
243 (
244 const tmp<complexField>& tfield,
245 const UList<int>& nn
246 )
247 {
248 auto tresult = tmp<complexField>::New(tfield);
249
250 transform(tresult.ref(), nn, FORWARD_TRANSFORM);
251
252 tfield.clear();
253
254 return tresult;
255 }
256
257
258 Foam::tmp<Foam::complexField> Foam::fft::reverseTransform
259 (
260 const tmp<complexField>& tfield,
261 const UList<int>& nn
262 )
263 {
264 auto tresult = tmp<complexField>::New(tfield);
265
266 transform(tresult.ref(), nn, REVERSE_TRANSFORM);
267
268 tfield.clear();
269
270 return tresult;
271 }
272
273
274 Foam::tmp<Foam::complexVectorField> Foam::fft::forwardTransform
275 (
276 const tmp<complexVectorField>& tfield,
277 const UList<int>& nn
278 )
279 {
280 auto tresult = tmp<complexVectorField>::New(tfield().size());
281
282 for (direction cmpt=0; cmpt<vector::nComponents; cmpt++)
283 {
284 tresult.ref().replace
285 (
286 cmpt,
287 forwardTransform(tfield().component(cmpt), nn)
288 );
289 }
290
291 tfield.clear();
292
293 return tresult;
294 }
295
296
297 Foam::tmp<Foam::complexVectorField> Foam::fft::reverseTransform
298 (
299 const tmp<complexVectorField>& tfield,
300 const UList<int>& nn
301 )
302 {
303 auto tresult = tmp<complexVectorField>::New(tfield().size());
304
305 for (direction cmpt=0; cmpt<vector::nComponents; cmpt++)
306 {
307 tresult.ref().replace
308 (
309 cmpt,
310 reverseTransform(tfield().component(cmpt), nn)
311 );
312 }
313
314 tfield.clear();
315
316 return tresult;
317 }
318
319
320 // ************************************************************************* //
Foam::roots::complex
Definition: Roots.H:57
Foam::component
void component(FieldField< Field, typename FieldField< Field, Type >::cmptType > &sf, const FieldField< Field, Type > &f, const direction d)
Definition: FieldFieldFunctions.C:44
Foam::tmp::clear
void clear() const noexcept
Definition: tmpI.H:325
Foam::tmp
A class for managing temporary objects.
Definition: PtrList.H:59
Foam::fft::transformDirection
transformDirection
Definition: fft.H:60
Foam::transform
dimensionSet transform(const dimensionSet &ds)
Return the argument; transformations do not change the dimensions.
Definition: dimensionSet.C:519
Foam::UList::begin
iterator begin()
Return an iterator to begin traversing the UList.
Definition: UListI.H:276
forAll
#define forAll(list, i)
Loop across all elements in list.
Definition: stdFoam.H:296
n
label n
Definition: TABSMDCalcMethod2.H:31
Foam::fft::reverseTransform
static tmp< complexField > reverseTransform(const tmp< complexField > &field, const UList< int > &nn)
Definition: fft.C:259
Foam::Im
scalarField Im(const UList< complex > &cf)
Extract imag component.
Definition: complexField.C:172
Foam::Field< scalar >
field
rDeltaTY field()
Foam::fft::fftRenumber
static void fftRenumber(List< complex > &data, const UList< int > &nn)
Definition: fft.C:94
Foam::fft::fftRenumberRecurse
static void fftRenumberRecurse(List< complex > &data, List< complex > &renumData, const UList< int > &nn, label nnprod, label ii, label l1, label l2)
Definition: fft.C:35
Foam::fft::transform
static void transform(complexField &field, const UList< int > &nn, transformDirection fftDirection)
Transform complex-value data.
Definition: fft.C:179
Foam::List
A 1D array of objects of type <T>, where the size of the vector is known and used for subscript bound...
Definition: HashTable.H:102
Foam::UList
A 1D vector of objects of type <T>, where the size of the vector is known and can be used for subscri...
Definition: HashTable.H:103
Foam::Re
scalarField Re(const UList< complex > &cf)
Extract real component.
Definition: complexField.C:159
Foam::fft::forwardTransform
static tmp< complexField > forwardTransform(const tmp< complexField > &field, const UList< int > &nn)
Definition: fft.C:243
Foam::tmp::New
static tmp< T > New(Args &&... args)
Construct tmp of T with forwarding arguments.
Foam::direction
uint8_t direction
Definition: direction.H:47
Foam::UList::size
void size(const label n) noexcept
Override size to be inconsistent with allocated storage.
Definition: UListI.H:360
N
const Vector< label > N(dict.get< Vector< label >>("N"))
fft.H
Foam::data
Database for solution data, solver performance and other reduced data.
Definition: data.H:54
Foam::VectorSpace< Vector< scalar >, scalar, 3 >::nComponents
static constexpr direction nComponents
Number of components in this vector space.
Definition: VectorSpace.H:101
Foam::fft::realTransform1D
static tmp< complexField > realTransform1D(const scalarField &field)
Transform real-value data.
Definition: fft.C:120 | __label__pos | 0.956512 |
Credit Card Tokenization: What It Is, How It Works
Tokenization replaces your sensitive card data with a jumble of letters and numbers that are useless to a hacker.
Written by Lindsay Konsko
Many, or all, of the products featured on this page are from our advertising partners who compensate us when you take certain actions on our website or click to take an action on their website. However, this does not influence our evaluations. Our opinions are our own. Here is a list of our partners and here's how we make money.
It's the credit card holder's nightmare: Hackers break into a merchant's computer system and steal credit card information, which they use to charge thousands of dollars' worth of stuff to your account. But imagine if instead of your name, card number, expiration date and other information, the hackers just got a meaningless jumble of numbers and letters.
That's credit card tokenization in action, and it's a key way payment systems can keep your card data safe.
1. What is tokenization?
In general, to “tokenize” something means to replace it with something else that represents the original but that is useless outside a certain context.
Think about going to a carnival and buying tokens to play games. Each token represents a certain amount of money, and as long as you're at the carnival, you can use the tokens like money for skee-ball, for video games, or perhaps to buy a funnel cake.
But you can’t use them once you leave the fair. The tokens have no value outside it.
2. How does tokenization work with credit cards?
Say you're buying something from a merchant that uses tokenization. The tokenization system intercepts your card data and replaces it with a random string of numbers and letters. Instead of Jane Smith, account number 4567 8910 1112 1314, expiration date 10/2025, there's a token like HX46YT794RG.
Merchant systems are often the weakest link in the chain of computer networks involved in a credit card purchase. The huge data breaches you hear about typically occur at merchants that store credit card data, not the banks or payment networks that handle the card transactions. With tokenization, the only data stored on the merchant's network is the token. The sensitive card data itself is stored on a server with much higher security. The token is basically a link to that data.
A hacker who steals a token from a merchant's system will find that it is worthless. It was valid only for a purchase at that merchant. Outside that context, like game tokens outside the arcade, it's unusable.
3. Is this the same as EMV technology?
The EMV chips embedded in modern credit cards operate on the same general principle. The chips generate a unique, one-time-use code for each purchase. But EMV chips work only with in-person transactions. When you give your number to an online merchant, the chip doesn't do anything. When an online merchant is using tokenization, though, your card data has protection similar to that offered by an EMV chip.
For an example of a system that uses tokenization, look at your phone. Apple Pay, Google Pay and other digital wallets operate on a tokenization system. Your credit cards aren't really "stored" in the digital wallet. What are? Tokens that link to your card information. These tokens don't work exactly like merchant tokenization, but the concept is the same.
4. Who benefits from credit card tokenization?
Everyone, really, except maybe for hackers.
Let’s start with consumers. Maybe data breaches are inevitable, but if one occurred at a merchant where you had used your card, tokenization would make it much less of a hassle. Because your card data was never stored by that merchant, only the token, you wouldn't need to get a new card with a new number. You wouldn't have to provide that new number everywhere you're using the card for automated payments — utilities, Netflix, Amazon, Uber, etc.
For merchants, credit card issuers and payment networks, tokenization reduces fraud, which reduces the cost of doing business.
Math 19
Complete list of Terms and Definitions for Math 19
Terms Definitions
variable hensū (Japanese for "variable")
4 4
Addition Sum
4x1 4
3X4 12
point a location
5x5 Figure it out!
5 x 10 50
parallelogram area area = lh
AREA OF A TRAPEZOID A = ½h(b₁ + b₂)
12 x 7 84
Denominator Bottom number of a fraction.
SKEW LINES LINES IN DIFFERENT PLANES THAT NEVER INTERSECT
money any circulating medium of exchange, including coins, paper money, and demand deposits.
plane a flat surface that extends indefinitely in all directions and has no thickness
rhombus A parallelogram with four equal sides and with corners that need not be square.
parallelogram a quadrilateral with both pairs of opposite sides parallel
triangle a closed plane figure having three sides and three angles.
minuend a number from which another is subtracted.
logarithms y = log_b(x) means b^y = x
interval a space of time between events or states.
sphere a solid figure with all points the same distance from the center point
change to give or get smaller money in exchange for:
proportional two quantities having the same or a constant ratio or relation
Multiplication Multypling two or more numbers by each other.
Recursive ranking use method x to determine first place winner...eliminate and use same method to determine second and third place.
factor pair a pair of numbers whose product equals a given number
vertical angles one of two opposite and equal angles formed by the intersection of two lines.
In a square root, what is the number under the radical sign (√) called? The radicand.
dividend a number that is to be divided by a divisor.
Surface Area of a Cone (Pie) (Radius Squared) +(Pie x Radius x Slant Height)
parallel lines if two lines in the same plane do not intersect, we say that the lines are
The top number in a fraction tells how many what? parts or shares we have
Lemma: covering a bounded subset of E^n by finitely many balls Let S be a bounded subset of E^n. Then for any ε > 0, S is contained in the union of a finite number of closed balls of radius ε.
65+25 90
12 144
13+2 15
12x7= 84
1PT= _____CUPS 2
4 x 3 12
linear a straight line
3 x 4 12
one right angle Right triangle
fraction represents part of a whole
quadrilateral a polygon with 4 sides
mid-point formula [(x₁+ x₂)/2 , (y₁+ y₂)/2]
rate a ratio that compares unlike units
sum aggregate of two or more numbers
numeral a written symbol that names a number.
linear inequality a mathematical sentence that describes a region of the coordinate plane having a boundary line; each point in the region is a solution of the inequality
product the result obtained by multiplying two or more quantities together.
outcome A possible result of a probability experiment
acute angle an angle less than 90 degrees
right angle angle that is formed when you haveperpendicular lines
r x-axis, r y-axis R over the origin
finite a set of elements capable of being completely counted and not zero
multiply to make many or manifold; increase the number, quantity, etc., of.
Divisor the number by which the dividend is divided
Triangular numbers the numbers in the pattern in the previous investigation
a triangle that has at least 2 congruent sides isosceles triangle
hundredth one part of 100 equal parts of a whole
quadrant 1 of the 4 regions that a coordinate plane is divided into by the x- axis and the y- axis
trapezoid area area = [(base1 + base2) / 2] · h
What are the reciprocal trigonometric functions? Cosecant = 1/sine Secant = 1/cosine Cotangent = 1/tangent
measure of a minor arc the measure of its central angle
SOHCAHTOA, say what??? (What are the trigonometric functions?) SOH: Sine = Opposite/Hypotenuse CAH: Cosine = Adjacent/Hypotenuse TOA: Tangent = Opposite/Adjacent
multiplying positive(positive) positive
solution of the system of linear equations any ordered pair in a system that makes all equations true | __label__pos | 0.930025 |
Creating WhatsApp Clone Using Firebase
In this tutorial, we will be making use of Firebase to create a WhatsApp clone.
Prerequisites
I will not be going through the entire detail of implementing this app, mainly just the logic and code files. This tutorial assumes that you have some existing knowledge of working with simple apps for iOS, and we will build on that. But do not fret, I will include the entire source code below for your reference if you wish to learn it line by line.
For this tutorial I have used Xcode 10 and Swift 4.2.
Tutorial
Let’s first create a new project, FakeChat, in any folder you like. You can choose to include or exclude Core Data / Unit / UI Tests, as I will not be covering them here.
Creating New Project with Pods
Create a single view app FakeChat
Next we will be installing various pods that will be used in this tutorial:
pod init
open Podfile -a Xcode
Add the required pods
pod install
Now that we have installed the required dependencies, let’s create a new Firebase Project.
Setting Up Firebase
Head over here and press Go To Console on the top right (make sure you are logged in).
After that, click Add Project with default settings and locations. In production apps, you may want to change the location depending on your needs.
Select Project Overview and add a new iOS app to the project. Make sure to use your own iOS Bundle ID, since it has to be unique for our app to work. Replace com.bilguun.FakeChat with something unique to you, such as com.yourorgname.FakeChat
Click on Register app and download the GoogleService-Info.plist. We will add this to the root of our project.
Make sure to add FakeChat as our target
Now the only thing we really need to do is to add the following in our AppDelegate.swift file's didFinishLaunchingWithOptions method.
FirebaseApp.configure()
let database = Database.database().reference()
database.setValue("Testing")
Creating Database
Now Firebase currently offers 2 types of databases that support cloud syncing: Cloud Firestore and Realtime Database. Essentially, Cloud Firestore is an upgraded Realtime Database.
Cloud Firestore is Firebase’s new flagship database for mobile app development. It improves on the successes of the Realtime Database with a new, more intuitive data model. Cloud Firestore also features richer, faster queries and scales better than the Realtime Database. It is easier to scale, can model complex data, and is better overall in the long run. However, to keep things simple, we will be sticking to the Realtime Database.
Realtime Database
Next, go back to Firebase Console and do the following:
This will create a Realtime Database in test mode, which means users do not have to be authenticated to read and write to this database. We will be changing the rules eventually, but let's go ahead with this so we can test our app.
Go ahead and run our iOS app on a simulator. Once it has started up, when you click on Database in Firebase, you should see something like below:
Our set value method worked!
Great! Once our app has finished launching, we have referenced the root of our Realtime Database and set a new value, Testing.
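To confirm the write from code as well, you can read the value back once. A minimal sketch, assuming the same `database` reference created above and the FirebaseDatabase pod we installed (the callback shape may differ slightly across SDK versions):

```swift
// Read the root value once and print it.
// Assumes `database` is the DatabaseReference from the earlier snippet.
database.observeSingleEvent(of: .value) { snapshot in
    // snapshot.value holds whatever is stored at this location
    if let value = snapshot.value as? String {
        print("Root value is: \(value)") // we expect the "Testing" string we set
    }
}
```

Unlike `observe(_:with:)`, `observeSingleEvent` fires exactly once and removes itself, which is handy for one-off sanity checks like this.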
We will now set aside Firebase Database, and come back to it again when we are ready to send messages. Let us now implement our ViewControllers and Signup / Login logic for our users.
Registering and Logging in
Let’s go to Authentication and click on Email / Password and enable that. What we are doing here is that we are giving the ability to our users to signup using email or password. We won’t checking the validity of the emails or authenticity of the users in this tutorial.
Firebase also has a lot more options to allow users to signup / login, feel free to explore that and incorporate that in your app.
Creating ViewControllers
This will be our initial storyboard
Let’s go about and create our initial storyboard. Here we will have just 2 screens embedded in Navigation Controller. Our welcome screen has input fields for email and password. We can then either login or register. Once we have done either one of those, we can present our Chats ViewController
See the screen recording above to get the gist of the current flow.
Handling Registration and Logging in
//
// ViewController.swift
// FakeChat
//
// Created by Bilguun Batbold on 23/3/19.
// Copyright © 2019 Bilguun. All rights reserved.
//
import UIKit
import NotificationCenter
import Firebase
import SVProgressHUD
class ViewController: UIViewController {
@IBOutlet weak var buttonStackView: UIStackView!
@IBOutlet weak var emailTextField: UITextField!
@IBOutlet weak var passwordTextField: UITextField!
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
}
@IBAction func signUpOrLoginDidTap(_ sender: UIButton) {
// try and get the required fields
guard let email = emailTextField.text, let password = passwordTextField.text else {
//show alert if not filled
let alert = UIAlertController(title: "Error", message: "Please ensure required fields are filled", preferredStyle: .alert)
alert.addAction(UIAlertAction(title: "Ok", style: .default, handler: nil))
self.present(alert, animated: true, completion: nil)
return
}
//button tags have been set in the storyboard 0 -> register 1 -> Login
switch sender.tag {
case 0:
registerUser(email: email, password: password)
case 1:
loginUser(email: email, password: password)
default:
return
}
}
private func registerUser(email: String, password: String) {
SVProgressHUD.show(withStatus: "Registering..")
//create user and wait for callback
Auth.auth().createUser(withEmail: email, password: password) { (result, error) in
if error != nil {
print(error?.localizedDescription as Any)
}
else {
// if not error, navigate to next page
self.performSegue(withIdentifier: "showChat", sender: self)
}
SVProgressHUD.dismiss()
}
}
private func loginUser(email: String, password: String) {
SVProgressHUD.show(withStatus: "Logging in..")
Auth.auth().signIn(withEmail: email, password: password) { (result, error) in
if error != nil {
print(error?.localizedDescription as Any)
}
else {
self.performSegue(withIdentifier: "showChat", sender: self)
}
SVProgressHUD.dismiss()
}
}
@IBAction func unwindToLogin(_ unwindSegue: UIStoryboardSegue) {
do {
try Auth.auth().signOut()
print("user signed out")
}
catch {
print("Error signing out")
}
emailTextField.text?.removeAll()
passwordTextField.text?.removeAll()
}
}
Update your main ViewController.swift to be like this. Make sure to connect the IBOutlets and IBActions in the storyboard to prevent crashing.
Let’s now run the app and register a new user
Enter whatever email you want and a password. Click register and the app should take you to the ChatsViewController after a brief delay.
Authentication page
In Firebase, refresh the Authentication page and you should see the new user that we have just registered. What we have is:
• Identifier — Email we used
• Providers — Icon showing what type of authentication it is
• Created — Created date
• Signed In — Date user last signed in
• User UUID — Unique identifier assigned to each user
• Password column not shown since it is sensitive data. It will be hashed and stored accordingly.
Go back to the main page and try logging in. Once the user has successfully logged in, we once again show the ChatsViewController.
ChatsViewController
Looks pretty decent!
This is what we will be implementing in our ChatsViewController. The basic idea is as follows:
1. Create a custom model that will hold message, incoming, sender
2. Create custom table view cell to define message alignment and background colour based on the model received. If the sender is not you, show the sender name on top of the message
3. Display the cells in the table view.
//
// ChatMessageCell.swift
// FakeChat
//
// Created by Bilguun Batbold on 23/3/19.
// Copyright © 2019 Bilguun. All rights reserved.
//
import Foundation
import UIKit
class ChatMessageCell: UITableViewCell {
let messageLabel = UILabel()
let messageBgView = UIView()
// change background view colour accordingly
var isIncoming: Bool = false {
didSet {
messageBgView.backgroundColor = isIncoming ? UIColor.white : #colorLiteral(red: 0.8823529412, green: 0.968627451, blue: 0.7921568627, alpha: 1)
}
}
override init(style: UITableViewCell.CellStyle, reuseIdentifier: String?) {
super.init(style: style, reuseIdentifier: reuseIdentifier)
addSubview(messageBgView)
addSubview(messageLabel)
messageBgView.translatesAutoresizingMaskIntoConstraints = false
messageBgView.layer.cornerRadius = 7
messageLabel.numberOfLines = 0
messageLabel.translatesAutoresizingMaskIntoConstraints = false
// set constraints for the message and the background view
let constraints = [
messageLabel.topAnchor.constraint(equalTo: topAnchor, constant: 24),
messageLabel.bottomAnchor.constraint(equalTo: bottomAnchor, constant: -24),
messageBgView.topAnchor.constraint(equalTo: messageLabel.topAnchor, constant: -16),
messageBgView.leadingAnchor.constraint(equalTo: messageLabel.leadingAnchor, constant: -16),
messageBgView.bottomAnchor.constraint(equalTo: messageLabel.bottomAnchor, constant: 16),
messageBgView.trailingAnchor.constraint(equalTo: messageLabel.trailingAnchor, constant: 16)
]
NSLayoutConstraint.activate(constraints)
selectionStyle = .none
backgroundColor = .clear
}
required init?(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
}
// what we will call from our tableview method
func configure(with model: MessageModel) {
isIncoming = model.isIncoming
if isIncoming {
guard let sender = model.sender else {return}
// align to the left
let nameAttributes = [
NSAttributedString.Key.foregroundColor : UIColor.orange,
NSAttributedString.Key.font : UIFont.boldSystemFont(ofSize: 16)
] as [NSAttributedString.Key : Any]
// sender name at top, message at the next line
let senderName = NSMutableAttributedString(string: sender + "\n", attributes: nameAttributes)
let message = NSMutableAttributedString(string: model.message)
senderName.append(message)
messageLabel.attributedText = senderName
messageLabel.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 32).isActive = true
messageLabel.trailingAnchor.constraint(equalTo: trailingAnchor, constant: -32).isActive = false
}
else {
// align to the right
messageLabel.text = model.message
messageLabel.trailingAnchor.constraint(equalTo: trailingAnchor, constant: -32).isActive = true
messageLabel.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 32).isActive = false
}
}
}
// message struct
struct MessageModel {
let message: String
let sender: String?
let isIncoming: Bool
}
Chat Message Cell
//
// ChatsViewController.swift
// FakeChat
//
// Created by Bilguun Batbold on 23/3/19.
// Copyright © 2019 Bilguun. All rights reserved.
//
import UIKit
class ChatsViewController: UIViewController {
//chatcell identifier
private let cellId = "chatCell"
//mock data to display
private let messages = [MessageModel.init(message: "My first message", sender: "User 1", isIncoming: true), MessageModel.init(message: "Somewhat maybe a long message about how my day was", sender: "User 1", isIncoming: true), MessageModel.init(message: "Very lengthy message on what exactly happened to me the whole day and how I have spent my weekend off just doing some coding and writing tutorials", sender: nil, isIncoming: false)]
@IBOutlet weak var tableView: UITableView!
@IBOutlet weak var textFieldViewHeight: NSLayoutConstraint!
override func viewDidLoad() {
super.viewDidLoad()
setup()
}
func setup() {
//set the delegates
tableView.delegate = self
tableView.dataSource = self
tableView.register(ChatMessageCell.self, forCellReuseIdentifier: cellId)
// do not show separators and set the background to gray-ish
tableView.separatorStyle = .none
tableView.backgroundColor = UIColor(white: 0.95, alpha: 1)
// extension of this can be found in the ViewController.swift
// basically hides the keyboard when tapping anywhere
hideKeyboardOnTap()
}
}
extension ChatsViewController: UITableViewDelegate, UITableViewDataSource {
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return messages.count
}
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier: cellId, for: indexPath) as! ChatMessageCell
cell.configure(with: messages[indexPath.row])
return cell
}
}
extension ChatsViewController: UITextFieldDelegate {
//handle when keyboard is shown and hidden
func textFieldDidBeginEditing(_ textField: UITextField) {
UIView.animate(withDuration: 0.3) {
self.textFieldViewHeight.constant = 308
self.view.layoutIfNeeded()
}
}
func textFieldDidEndEditing(_ textField: UITextField) {
UIView.animate(withDuration: 0.3) {
self.textFieldViewHeight.constant = 50
self.view.layoutIfNeeded()
}
}
}
Chats View Controller
This is all we really need to display the messages accordingly. Let’s now connect our TableView data to Firebase! We will send a message and ensure that we can get it back on another simulator.
Connecting to the Firebase database
We are going to add 2 new methods to be able to communicate with Firebase Database:
1. getMessages() — attach a .childAdded observer to the Messages node so every stored message (and each newly written one) is appended to the table view
2. sendButtonDidTap() — write the text field's content to the Messages node under an auto-generated ID
I will not bore you with too many words, so here is the actual implementation
//
// ChatsViewController.swift
// FakeChat
//
// Created by Bilguun Batbold on 23/3/19.
// Copyright © 2019 Bilguun. All rights reserved.
//
import UIKit
import Firebase
class ChatsViewController: UIViewController {
//chatcell identifier
private let cellId = "chatCell"
private var messages = [MessageModel]()
let messageDB = Database.database().reference().child("Messages")
//MARK: Outlets
@IBOutlet weak var tableView: UITableView!
@IBOutlet weak var textFieldViewHeight: NSLayoutConstraint!
@IBOutlet weak var messageTextField: UITextField!
@IBOutlet weak var sendButton: UIButton!
override func viewDidLoad() {
super.viewDidLoad()
setup()
}
override func viewWillDisappear(_ animated: Bool) {
super.viewWillDisappear(animated)
messageDB.removeAllObservers()
}
func setup() {
//set the delegates
tableView.delegate = self
tableView.dataSource = self
tableView.register(ChatMessageCell.self, forCellReuseIdentifier: cellId)
// do not show separators and set the background to gray-ish
tableView.separatorStyle = .none
tableView.backgroundColor = UIColor(white: 0.95, alpha: 1)
getMessages()
// extension of this can be found in the ViewController.swift
// basically hides the keyboard when tapping anywhere
hideKeyboardOnTap()
}
// call this to listen to database changes and add it into our tableview
func getMessages() {
messageDB.observe(.childAdded) { (snapshot) in
let snapshotValue = snapshot.value as! Dictionary<String, String>
guard let message = snapshotValue["message"], let sender = snapshotValue["sender"] else {return}
let isIncoming = (sender == Auth.auth().currentUser?.email ? false : true)
let chatMessage = MessageModel.init(message: message, sender: sender, isIncoming: isIncoming)
self.addNewRow(with: chatMessage)
}
}
// function to add our cells with animation
func addNewRow(with chatMessage: MessageModel) {
self.tableView.beginUpdates()
self.messages.append(chatMessage)
let indexPath = IndexPath(row: self.messages.count-1, section: 0)
self.tableView.insertRows(at: [indexPath], with: .top)
self.tableView.endUpdates()
}
//MARK: Buttons
@IBAction func sendButtonDidTap(_ sender: Any) {
// return if message does not exist
guard let message = messageTextField.text else {return}
if message == "" {
return
}
//stop editing the message
messageTextField.endEditing(true)
// disable the buttons to avoid complication for simplicity
messageTextField.isEnabled = false
sendButton.isEnabled = false
let messageDict = ["sender": Auth.auth().currentUser?.email, "message" : message]
messageDB.childByAutoId().setValue(messageDict) { (error, reference) in
if error != nil {
print(error?.localizedDescription as Any)
}
else {
print("Message sent!")
//enable the buttons and remove the text
self.messageTextField.isEnabled = true
self.sendButton.isEnabled = true
self.messageTextField.text?.removeAll()
}
}
}
}
// MARK: - TableView Delegates
extension ChatsViewController: UITableViewDelegate, UITableViewDataSource {
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return messages.count
}
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier: cellId, for: indexPath) as! ChatMessageCell
cell.configure(with: messages[indexPath.row])
return cell
}
}
//MARK: - TextField Delegates
extension ChatsViewController: UITextFieldDelegate {
//handle when keyboard is shown and hidden
func textFieldDidBeginEditing(_ textField: UITextField) {
UIView.animate(withDuration: 0.3) {
self.textFieldViewHeight.constant = 308
self.view.layoutIfNeeded()
}
}
func textFieldDidEndEditing(_ textField: UITextField) {
UIView.animate(withDuration: 0.3) {
self.textFieldViewHeight.constant = 50
self.view.layoutIfNeeded()
}
}
}
extension ChatsViewController {
func hideKeyboardOnTap() {
let tap: UITapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(dismissKeyboard(_:)))
tap.cancelsTouchesInView = false
tableView.addGestureRecognizer(tap)
}
@objc func dismissKeyboard(_ sender: UITapGestureRecognizer) {
view.endEditing(true)
if let navController = self.navigationController {
navController.view.endEditing(true)
}
}
}
Go through the code and try to understand what exactly happened. The new methods are sendButtonDidTap and getMessages. That is all required to properly communicate with our Firebase Database. Go ahead and run the app, register 2 users if you have not done so, and login with them on a simulator or your phone. The end result should be something like this:
The messages are sent instantly, and received instantly as well.
Concluding Thoughts
About our app
Yes, I know: although our WhatsApp clone kind of works, it has no concept of friends, which means that at this stage the ChatsViewController acts like one huge group where all registered members can send and receive messages. In order to incorporate the idea of sending messages to friends / groups / rooms, our database structure would need to be changed to facilitate that. Perhaps I will give an update on how you can achieve that using Firebase in the near future. If anyone wants to know how that can be done, feel free to let me know as well.
Firebase can be a really powerful tool to get started with real time information exchange if you do not have the necessary skills or resources to get your own server. In the future I will update this or create a new tutorial that covers implementing our own service using Sockets / MongoDB instead of Firebase. But to get started, Firebase provides a super neat way of allowing real time information sharing.
The final source code can be found here.
If anyone finds these useful, feel free to share this or let me know should there be an error / bad practice / implementations.
Have fun coding!
Next: Motorola 68HC11 SCI Interface Up: Serial Communication Previous: Asynchronous Serial Communication (SCI)
RS-232 Serial Protocol
The RS-232 serial communication protocol is a standard protocol used in asynchronous serial communication. It is the primary protocol used over modem lines. It is the protocol used by the MicroStamp11 when it communicates with a host PC.
Figure 23 shows the relationship between the various components in a serial link. These components are the UART, the serial channel, and the interface logic. An interface chip known as the universal asynchronous receiver/transmitter or UART is used to implement serial data transmission. The UART sits between the host computer and the serial channel. The serial channel is the collection of wires over which the bits are transmitted. The output from the UART is a standard TTL/CMOS logic level of 0 or 5 volts. In order to improve bandwidth, remove noise, and increase range, this TTL logic level is converted to an RS-232 logic level of $-12$ or $+12$ volts before being sent out on the serial channel. This conversion is done by the interface logic shown in figure 23. In your system the interface logic is implemented by the comm stamp.
Figure 23: Asynchronous (RS-232) serial link
A frame is a complete and nondivisible packet of bits. A frame includes both information (e.g., data and characters) and overhead (e.g., start bit, error checking and stop bits). In asynchronous serial protocols such as RS-232, the frame consists of one start bit, seven or eight data bits, parity bits, and stop bits. A timing diagram for an RS-232 frame consisting of one start bit, 7 data bits, one parity bit and two stop bits is shown below in figure 24. Note that the exact structure of the frame must be agreed upon by both transmitter and receiver before the comm-link is opened.
Figure 24: RS-232 Frame (1 start bit, 7 data bits, 1 parity bit, and 2 stop bits)
Most of the bits in a frame are self-explanatory. The start bit signals the beginning of a frame and the stop bit signals the end of a frame. The only bit that needs some explanation is the parity bit, which is used to detect transmission errors. For even parity checking, the number of 1's in the data plus the parity bit must equal an even number. For odd parity, this sum must be an odd number. Before sending out a frame, the transmitter sets the parity bit so that the frame has either even or odd parity, according to the type of parity check (even or odd) that the receiver and transmitter have already agreed upon. When the frame is received, the receiver checks the parity of the received frame. If the parity is wrong, then the receiver knows an error occurred in transmission and the receiver can request that the transmitter re-send the frame.
In cases where the probability of error is extremely small, then it is customary to ignore the parity bit. For communication between the MicroStamp11 and the host computer, this is usually the case and so we ignore the parity bit.
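The parity rule above is simple enough to state in a few lines of code. The following is a small Python sketch (the helper name is illustrative, not part of any standard library) that computes the parity bit for a sequence of data bits:

```python
def parity_bit(data_bits, even=True):
    """Return the parity bit for a sequence of 0/1 data bits.

    For even parity, the count of 1s in data + parity must be even;
    for odd parity, it must be odd.
    """
    ones = sum(data_bits)
    bit = ones % 2          # even parity: pad the count of 1s up to even
    return bit if even else bit ^ 1

# 7 data bits of ASCII 'A' (0x41, LSB last here) contain two 1s,
# so even parity adds a 0 and odd parity adds a 1
bits_A = [1, 0, 0, 0, 0, 0, 1]
```

After appending the even-parity bit, the total number of 1's in the frame's data portion is guaranteed even, which is exactly what the receiver checks.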
The bit time is the basic unit of time used in serial communication. It is the time between each bit. The transmitter outputs a bit, waits one bit time and then outputs the next bit. The start bit is used to synchronize the transmitter and receiver. After the receiver senses the true-false transition in the start bit, it waits one half bit time and then starts reading the serial line once every bit time after that. The baud rate is the total number of bits (information, overhead, and idle) per time that is transmitted over the serial link. So we can compute the baud rate as the reciprocal of the bit time.
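To make the arithmetic concrete, here is a short Python sketch (function names are illustrative) that computes the bit time and the back-to-back frame rate for the frame format of figure 24:

```python
def bit_time(baud_rate):
    """Seconds per bit: the reciprocal of the baud rate."""
    return 1.0 / baud_rate

def frame_rate(baud_rate, bits_per_frame):
    """Frames per second when frames are sent back to back."""
    return baud_rate / bits_per_frame

# Example: 9600 baud with the figure 24 frame
# (1 start + 7 data + 1 parity + 2 stop = 11 bits per frame).
# bit_time(9600) is about 104.2 microseconds, and
# frame_rate(9600, 11) is about 872.7 frames per second.
```

Note how the overhead bits (start, parity, stop) reduce the useful character rate well below baud/8.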
Bill Goodwine 2002-09-29 | __label__pos | 0.992672 |
Detecting Objects' Motion in 2 Subsequent Images
I've been posting examples of Hough Transform since last month, and now is the time to switch to other examples.
1. Reading image and comparing 2 images side by side
clear all;clc;
Ia = imread('pic23a.jpg');
Ib = imread('pic23b.jpg');
subplot(121);imshow(Ia);
subplot(122);imshow(Ib);
2. Finding the location of green object in 2 images
a. Set the threshold value for green color
p/s: I can't make the "&" appear in text, so the text above is shown as an image.
b. Find the location of the green object
[y1,x1] = find(Ia_green==1);
[y2,x2] = find(Ib_green==1);
c. Find the centroid of the green object
x1 = round(mean(x1));
y1 = round(mean(y1));
x2 = round(mean(x2));
y2 = round(mean(y2));
3. Putting 2 images together and show the movement of the objects
figure;
imshow(Ia); hold on;
imshow(Ib);
alpha(.5);
plot(x1,y1,'r*');
plot(x2,y2,'ro');
plot([x1,x2],[y1,y2]);
The '*' indicates the start point of the object while the 'o' is the stop point of the object. Other objects' movement could be found by modifying the step number 2. | __label__pos | 0.992275 |
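For readers without MATLAB, the centroid computation of steps 2(b)–2(c) can be sketched in plain Python. This is a toy analogue, not a translation of the full script: `mask` stands in for the thresholded binary image (like `Ia_green`), and the function name is illustrative.

```python
def centroid(mask):
    """mask: 2-D list of 0/1 values (a thresholded binary image).

    Returns (row, col) of the object's centroid, rounded like
    MATLAB's round(mean(...)) in the original script.
    """
    ys = [y for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    xs = [x for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    return (round(sum(ys) / len(ys)), round(sum(xs) / len(xs)))

# a 1x3 "green object" occupying row 1, columns 1-3
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
center = centroid(mask)
# the object's pixels average to row 1, column 2
```

Tracking motion then reduces to computing this centroid in each of the two frames and drawing the segment between them, exactly as the MATLAB plot commands do.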
4.4: Horizontal and Vertical Line Graphs
Difficulty Level: At Grade Created by: CK-12
What if you were given the graph of a vertical or horizontal line? How could you write the equation of this line? After completing this Concept, you'll be able to write horizontal and vertical linear equations and graph them in the coordinate plane.
Watch This
CK-12 Foundation: 0404S Graphs of Horizontal and Vertical Lines (H264)
Guidance
How do you graph equations of horizontal and vertical lines? See how in the example below.
Example A
“Mad-cabs” have an unusual offer going on. They are charging $7.50 for a taxi ride of any length within the city limits. Graph the function that relates the cost of hiring the taxi (y) to the length of the journey in miles (x).
To proceed, the first thing we need is an equation. You can see from the problem that the cost of a journey doesn't depend on the length of the journey. It should come as no surprise, then, that the equation does not have x in it. Since any value of x results in the same value of y (7.5), the value you choose for x doesn't matter, so it isn't included in the equation. Here is the equation:
y=7.5
The graph of this function is shown below. You can see that it’s simply a horizontal line.
Any time you see an equation of the form "y = constant," the graph is a horizontal line that intercepts the y-axis at the value of the constant.
Similarly, when you see an equation of the form x = constant, the graph is a vertical line that intercepts the x-axis at the value of the constant. (Notice that that kind of equation is a relation, and not a function, because each x-value (there's only one in this case) corresponds to many (actually an infinite number of) y-values.)
Example B
Plot the following graphs.
(a) y=4
(b) y=−4
(c) x=4
(d) x=−4
(a) y=4 is a horizontal line that crosses the y-axis at 4.
(b) y=−4 is a horizontal line that crosses the y-axis at −4.
(c) x=4 is a vertical line that crosses the x-axis at 4.
(d) x=−4 is a vertical line that crosses the x-axis at −4.
Example C
Find an equation for the xaxis and the yaxis.
Look at the axes on any of the graphs from previous examples. We have already said that they intersect at the origin (the point where x=0 and y=0). The following definition could easily work for each axis.
x-axis: A horizontal line crossing the y-axis at zero.
y-axis: A vertical line crossing the x-axis at zero.
So using these definitions as our guide, we could define the x-axis as the line y=0 and the y-axis as the line x=0.
Watch this video for help with the Examples above.
CK-12 Foundation: Graphs of Horizontal and Vertical Lines
Vocabulary
• Horizontal lines are defined by the equation y= constant and vertical lines are defined by the equation x= constant.
• Be aware that although we graph the function as a line to make it easier to interpret, the function may actually be discrete.
Guided Practice
Write the equation of the horizontal line that is 3 units below the x-axis.
Solution:
The horizontal line that is 3 units below the x-axis will intercept the y-axis at y=−3. No matter what the value of x, the y-value of the line will always be −3. This means that the equation of the line is y=−3.
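A quick way to convince yourself that a horizontal line pairs every x with the same y is to model it as a constant function. This is a purely illustrative Python sketch, not part of the lesson:

```python
def horizontal_line(c):
    """Return the function x -> c, i.e. the horizontal line y = c."""
    return lambda x: c

# the line from the guided practice: 3 units below the x-axis
y = horizontal_line(-3)
samples = [y(x) for x in (-10, 0, 4.5, 100)]
# every sample is the same constant, no matter which x we pick
```

A vertical line x = c cannot be modeled this way, since one x-value would have to map to every y — which is why it is a relation, not a function.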
Practice
1. Write the equations for the five lines (A through E) plotted in the graph below.
For 2-10, use the graph above to determine at what points the following lines intersect.
2. A and E
3. A and D
4. C and D
5. B and the y-axis
6. E and the x-axis
7. C and the line y = x
8. E and the line y = (1/2)x
9. A and the line y = x + 3
10. B and the line y = −2x
Vocabulary
Horizontally
Horizontally means written across in rows.
Vertically
Vertically means written up and down in columns.
Date Created: Aug 13, 2012
Last Modified: Apr 11, 2016
I. Configuring the request
1. Background
The key request parameter is stream=True. By default, when you make a network request, the response body is downloaded immediately. You can override this behavior with the stream parameter, deferring the download of the response body until the Response.content attribute is accessed.
tarball_url = 'https://github.com/kennethreitz/requests/tarball/master'
r = requests.get(tarball_url, stream=True)
At this point only the response headers have been downloaded; the connection is kept open, which allows us to fetch the content conditionally:
if int(r.headers['content-length']) < TOO_LONG:
content = r.content
...
You can further control the workflow with the Response.iter_content and Response.iter_lines methods, or read from the underlying urllib3 urllib3.HTTPResponse via Response.raw:
from contextlib import closing
with closing(requests.get('http://httpbin.org/get', stream=True)) as r:
# Do things with the response here.
Keep-alive (persistent connections)
Thanks to urllib3, persistent connections within a session are handled fully automatically: any request issued within the same session automatically reuses the appropriate connection!
Note: a connection is released back to the connection pool only after all of the response body has been read, so be sure to set stream to False or to read the content attribute of the Response object.
2. Downloading a file and showing a progress bar
with closing(requests.get(self.url(), stream=True)) as response:
    chunk_size = 1024  # maximum size of a single chunk
    content_size = int(response.headers['content-length'])  # total size of the body
    progress = ProgressBar(self.file_name(), total=content_size,
                           unit="KB", chunk_size=chunk_size,
                           run_status="downloading", fin_status="download complete")
    with open(self.file_name(), "wb") as file:
        for data in response.iter_content(chunk_size=chunk_size):
            file.write(data)
            progress.refresh(count=len(data))
II. Implementing the progress bar class
In Python 3, the default terminator of the print() method is end='\n'; after each call the cursor automatically moves to the next line, so the previous output can no longer be updated.
If the terminator is changed to "\r", the cursor returns to the start of the line without advancing to the next one, so calling print() again overwrites that line.
The terminator can also use "\b", the backspace character, which moves the cursor back one position; several can be used to back up as far as needed.
When the line of output is finished, change the terminator back to "\n", or leave it unspecified to use the default.
Below is a formatted progress-bar display module. The code is as follows:
class ProgressBar(object):
    def __init__(self, title,
                 count=0.0,
                 run_status=None,
                 fin_status=None,
                 total=100.0,
                 unit='', sep='/',
                 chunk_size=1.0):
        super(ProgressBar, self).__init__()
        self.info = "【%s】%s %.2f %s %s %.2f %s"
        self.title = title
        self.total = total
        self.count = count
        self.chunk_size = chunk_size
        self.status = run_status or ""
        self.fin_status = fin_status or " " * len(self.status)
        self.unit = unit
        self.seq = sep

    def __get_info(self):
        # [title] status progress unit separator total unit
        _info = self.info % (self.title, self.status,
                             self.count / self.chunk_size, self.unit,
                             self.seq, self.total / self.chunk_size, self.unit)
        return _info

    def refresh(self, count=1, status=None):
        self.count += count
        # if status is not None:
        self.status = status or self.status
        end_str = "\r"
        if self.count >= self.total:
            end_str = '\n'
            self.status = status or self.fin_status
        print(self.__get_info(), end=end_str)
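The core trick — overwriting one line with "\r" until the work is done, then finishing with "\n" — can be exercised without any network at all. The self-contained sketch below reimplements that idea in a few lines (all names are illustrative) and simulates receiving a 4 KB file in 1 KB chunks, writing to a StringIO so the output can be inspected:

```python
import io
import sys

def show_progress(title, total, chunk_size, stream=sys.stdout):
    """Minimal sketch of the \r-based progress technique: print with
    end="\r" while running, end="\n" on the final update."""
    done = 0
    while done < total:
        done = min(done + chunk_size, total)
        end = "\n" if done >= total else "\r"
        print("[%s] %.2f/%.2f KB" % (title, done / 1024, total / 1024),
              end=end, file=stream)
    return done

buf = io.StringIO()
received = show_progress("demo.bin", total=4096, chunk_size=1024, stream=buf)
# the final update reports the full size: "[demo.bin] 4.00/4.00 KB"
```

In a real terminal (stream=sys.stdout) the intermediate "\r" lines overwrite each other, so the user sees a single line ticking up, just as ProgressBar.refresh() produces.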
III. References
http://www.gaoxuewen.cn/index.php/python/1086.html
http://cn.python-requests.org/en/latest/user/advanced.html
Source:
Analysis of the IP Fragment Reassembly Process
Copyleft of this document belongs to yfydz; it may be freely copied and reproduced under the GPL. Any commercial use is strictly prohibited.
MSN: [email protected]
Source: http://yfydz.cublog.cn
1. Preface
Reassembling IP fragments is an important way for a firewall to improve security: by reassembling fragments in advance it can effectively defend against various fragment attacks, and the netfilter firewall in the Linux kernel reassembles fragmented IP packets automatically. This article describes the IP reassembly process in the Linux kernel; the kernel code version is 2.4.26.
2. Handling process
The basic reassembly function is ip_defrag(), implemented in net/ipv4/ip_fragment.c. The basic flow is to keep a queue of fragment-processing nodes, where each node holds a linked list of the fragments belonging to one datagram. When all fragments have arrived, the packet is reassembled; if they do not all arrive within a certain time (30 seconds by default), the node is released.
2.1 Data Structure
When a fragment is being processed, the cb field of the skb holds the fragment control information, struct ipfrag_skb_cb:

#define FRAG_CB(skb)	((struct ipfrag_skb_cb *)((skb)->cb))

struct ipfrag_skb_cb
{
	struct inet_skb_parm	h;
	int			offset;
};
The ipq queue-node structure:

/* Describe an entry in the "incomplete datagrams" queue. */
struct ipq {
	/* next node in the hash chain */
	struct ipq	*next;		/* linked list pointers */
	/* least-recently-used list */
	struct list_head lru_list;	/* lru list member */
	/* the following four fields identify one group of IP fragments */
	u32		saddr;
	u32		daddr;
	u16		id;
	u8		protocol;
	/* status flags */
	u8		last_in;
#define COMPLETE	4	/* all data has arrived */
#define FIRST_IN	2	/* the first fragment has arrived */
#define LAST_IN		1	/* the last fragment has arrived */

	/* list of received IP fragments */
	struct sk_buff	*fragments;	/* linked list of received fragments */
	/* len is the total length derived from the offset info in the latest fragment */
	int		len;		/* total length of original datagram */
	/* meat is the sum of the actual lengths of all received fragments */
	int		meat;
	spinlock_t	lock;
	atomic_t	refcnt;
	/* expiry timer */
	struct timer_list timer;	/* when will this queue expire? */
	/* address of the previous node's next pointer */
	struct ipq	**pprev;
	/* interface index of the device the fragments arrived on */
	int		iif;
	/* timestamp of the latest fragment */
	struct timeval	stamp;
};
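As a rough mental model of the bookkeeping above (not kernel code), the same idea can be sketched in Python: queue nodes are keyed by (saddr, daddr, id, protocol), each node tracks len (known only once the last fragment arrives) and meat (bytes received so far), and reassembly fires when meat == len. All names are illustrative, and unlike ip_frag_queue() this toy ignores overlapping or malicious fragments and timeouts.

```python
class FragQueue:
    """Toy analogue of struct ipq: one node per in-flight datagram."""
    def __init__(self):
        self.fragments = {}   # offset -> payload bytes (cf. qp->fragments)
        self.len = None       # total length, set by the last fragment (cf. qp->len)
        self.meat = 0         # bytes received so far (cf. qp->meat)

    def add(self, offset, data, more_fragments):
        self.fragments[offset] = data
        self.meat += len(data)
        if not more_fragments:          # MF flag clear: this is the last fragment
            self.len = offset + len(data)

    def complete(self):
        return self.len is not None and self.meat == self.len

    def reassemble(self):
        return b"".join(self.fragments[off] for off in sorted(self.fragments))

queues = {}  # (saddr, daddr, id, protocol) -> FragQueue, cf. ipq_hash

def defrag(saddr, daddr, frag_id, proto, offset, data, mf):
    """Returns the reassembled payload, or None while fragments are missing."""
    key = (saddr, daddr, frag_id, proto)
    q = queues.setdefault(key, FragQueue())   # cf. ip_find()/ip_frag_create()
    q.add(offset, data, mf)                   # cf. ip_frag_queue()
    if q.complete():                          # cf. FIRST_IN|LAST_IN and meat == len
        del queues[key]
        return q.reassemble()                 # cf. ip_frag_reasm()
    return None
```

Feeding two fragments of one datagram returns None for the first and the joined payload for the second, mirroring how ip_defrag() only returns a packet once qp->meat == qp->len.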
2.2 The ip_defrag() function

This is the entry function for defragmentation. It returns the reassembled skb, or NULL.

struct sk_buff *ip_defrag(struct sk_buff *skb)
{
	struct iphdr *iph = skb->nh.iph;
	struct ipq *qp;
	struct net_device *dev;

	/* statistics */
	IP_INC_STATS_BH(IpReasmReqds);

	/* Start by cleaning up the memory. */
	/* check whether the memory used by fragments exceeds the configured limit */
	if (atomic_read(&ip_frag_mem) > sysctl_ipfrag_high_thresh)
		/* ip_evictor() frees packets that cannot currently be reassembled,
		 * bringing ip_frag_mem below sysctl_ipfrag_low_thresh
		 * (the low-water mark) */
		ip_evictor();

	dev = skb->dev;

	/* Lookup (or create) queue header */
	/* look up the queue node matching this IP header */
	if ((qp = ip_find(iph)) != NULL) {
		struct sk_buff *ret = NULL;

		spin_lock(&qp->lock);
		/* put the skb on this queue node's fragment list */
		ip_frag_queue(qp, skb);

		if (qp->last_in == (FIRST_IN|LAST_IN) &&
		    qp->meat == qp->len)
			/* the reassembly conditions are met; return the rebuilt packet */
			ret = ip_frag_reasm(qp, dev);

		spin_unlock(&qp->lock);
		/* release the queue node when its reference count drops to 0 */
		ipq_put(qp);
		return ret;
	}

	/* no matching node exists and none could be created; drop the packet */
	IP_INC_STATS_BH(IpReasmFails);
	kfree_skb(skb);
	return NULL;
}
2.3 The ip_find() function

ip_find() looks up the queue node whose source address, destination address, protocol and ID match the packet; if no node is found, a new one is created:

static inline struct ipq *ip_find(struct iphdr *iph)
{
	__u16 id = iph->id;
	__u32 saddr = iph->saddr;
	__u32 daddr = iph->daddr;
	__u8 protocol = iph->protocol;
	/* the fragment queue is implemented as a hash table; the hash function
	 * takes the four IP-header parameters: source address, destination
	 * address, protocol and ID */
	unsigned int hash = ipqhashfn(id, saddr, daddr, protocol);
	struct ipq *qp;

	read_lock(&ipfrag_lock);
	for (qp = ipq_hash[hash]; qp; qp = qp->next) {
		if (qp->id == id &&
		    qp->saddr == saddr &&
		    qp->daddr == daddr &&
		    qp->protocol == protocol) {
			atomic_inc(&qp->refcnt);
			read_unlock(&ipfrag_lock);
			return qp;
		}
	}
	read_unlock(&ipfrag_lock);

	/* the node does not exist; create a new queue node */
	return ip_frag_create(hash, iph);
}
The ip_frag_create() function returns a new fragment queue node:

static struct ipq *ip_frag_create(unsigned hash, struct iphdr *iph)
{
	struct ipq *qp;

	/* allocate a new fragment queue node */
	if ((qp = frag_alloc_queue()) == NULL)
		goto out_nomem;

	qp->protocol = iph->protocol;
	qp->last_in = 0;
	qp->id = iph->id;
	qp->saddr = iph->saddr;
	qp->daddr = iph->daddr;
	qp->len = 0;
	/* meat is the total length of all fragments currently in the queue */
	qp->meat = 0;
	qp->fragments = NULL;
	qp->iif = 0;

	/* Initialize a timer for this entry. */
	init_timer(&qp->timer);
	qp->timer.data = (unsigned long) qp;	/* pointer to queue */
	/* on timeout: free the memory and send an ICMP fragment-reassembly
	 * time exceeded error */
	qp->timer.function = ip_expire;		/* expire function */
	qp->lock = SPIN_LOCK_UNLOCKED;
	/* the node's initial reference count is 1 -- note that it cannot be 0 */
	atomic_set(&qp->refcnt, 1);

	/* insert the fragment node into the queue hash table */
	return ip_frag_intern(hash, qp);

out_nomem:
	NETDEBUG(if (net_ratelimit()) printk(KERN_ERR "ip_frag_create: no memory left!\n"));
	return NULL;
}
2.4 The ip_frag_queue() function
The ip_frag_queue() function inserts a newly arrived skb into its queue node. This function is the key to defending against fragment attacks: it must handle every abnormal case that can arise during reassembly:
/* Ping of Death, Teardrop and similar attacks rely on abnormal fragment
 * offsets, so the offsets must be checked carefully. */
static void ip_frag_queue(struct ipq *qp, struct sk_buff *skb)
{
    struct sk_buff *prev, *next;
    int flags, offset;
    int ihl, end;

    /* Error out if a new fragment arrives for a queue already marked COMPLETE */
    if (qp->last_in & COMPLETE)
        goto err;

    /* Compute the offset of this fragment. The offset field in the IP header
     * is only 13 bits wide, but it is expressed in units of 8 bytes. */
    offset = ntohs(skb->nh.iph->frag_off);
    flags = offset & ~IP_OFFSET;
    offset &= IP_OFFSET;
    offset <<= 3;       /* offset is in 8-byte chunks */
    ihl = skb->nh.iph->ihl * 4;

    /* Determine the position of this fragment. */
    /* end is the position of the end of this fragment within the complete packet */
    end = offset + skb->len - ihl;

    /* Is this the final fragment? */
    if ((flags & IP_MF) == 0) {
        /* No more fragments follow. */
        /* If we already have some bits beyond end
         * or have different end, the segment is corrupted.
         */
        if (end < qp->len ||
            ((qp->last_in & LAST_IN) && end != qp->len))
            goto err;
        qp->last_in |= LAST_IN;
        qp->len = end;
    } else {
        /* Check whether the data length is 8-byte aligned */
        if (end & 7) {
            end &= ~7;
            if (skb->ip_summed != CHECKSUM_UNNECESSARY)
                skb->ip_summed = CHECKSUM_NONE;
        }
        if (end > qp->len) {
            /* The length exceeds the currently recorded length. */
            /* Some bits beyond end -> corruption. */
            if (qp->last_in & LAST_IN)
                goto err;
            qp->len = end;
        }
    }
    if (end == offset)
        goto err;

    /* Remove the IP header, keeping only the data */
    if (pskb_pull(skb, ihl) == NULL)
        goto err;
    /* Trim the skb to end - offset bytes, the actual payload length of
     * this fragment */
    if (pskb_trim(skb, end - offset))
        goto err;

    /* Find out which fragments are in front and at the back of us
     * in the chain of fragments so far. We must know where to put
     * this fragment, right?
     */
    /* Fragments may arrive out of order, so find the right position
     * for this fragment in the chain. */
    prev = NULL;
    for (next = qp->fragments; next != NULL; next = next->next) {
        if (FRAG_CB(next)->offset >= offset)
            break;  /* bingo! */
        prev = next;
    }

    /* We found where to put this one. Check for overlap with
     * preceding fragment, and, if needed, align things so that
     * any overlaps are eliminated.
     */
    /* Overlaps are allowed, as long as the data is consistent */
    if (prev) {
        int i = (FRAG_CB(prev)->offset + prev->len) - offset;

        if (i > 0) {
            offset += i;
            if (end <= offset)
                goto err;
            if (!pskb_pull(skb, i))
                goto err;
            if (skb->ip_summed != CHECKSUM_UNNECESSARY)
                skb->ip_summed = CHECKSUM_NONE;
        }
    }

    /* If fragments overlap, the offsets of all following fragments in the
     * queue must be adjusted and the accumulated length reduced accordingly */
    while (next && FRAG_CB(next)->offset < end) {
        int i = end - FRAG_CB(next)->offset; /* overlap is 'i' bytes */

        if (i < next->len) {
            /* Eat head of the next overlapped fragment
             * and leave the loop. The next ones cannot overlap.
             */
            if (!pskb_pull(next, i))
                goto err;
            FRAG_CB(next)->offset += i;
            qp->meat -= i;
            if (next->ip_summed != CHECKSUM_UNNECESSARY)
                next->ip_summed = CHECKSUM_NONE;
            break;
        } else {
            struct sk_buff *free_it = next;

            /* Old fragment is completely overridden with
             * new one. Drop it.
             */
            next = next->next;
            if (prev)
                prev->next = next;
            else
                qp->fragments = next;
            qp->meat -= free_it->len;
            frag_kfree_skb(free_it);
        }
    }

    /* Record this skb's own offset */
    FRAG_CB(skb)->offset = offset;

    /* Insert this fragment in the chain of fragments. */
    skb->next = next;
    if (prev)
        prev->next = skb;
    else
        qp->fragments = skb;

    if (skb->dev)
        qp->iif = skb->dev->ifindex;
    skb->dev = NULL;
    /* Update the timestamp */
    qp->stamp = skb->stamp;
    /* Accumulate the total length of data received so far */
    qp->meat += skb->len;
    /* Add the skb size to the fragment memory counter */
    atomic_add(skb->truesize, &ip_frag_mem);
    if (offset == 0)
        qp->last_in |= FIRST_IN;

    write_lock(&ipfrag_lock);
    /* Move this node to the tail of the least-recently-used list; when the
     * fragment memory limit is exceeded, the least recently used fragments
     * are released first */
    list_move_tail(&qp->lru_list, &ipq_lru_list);
    write_unlock(&ipfrag_lock);

    return;

err:
    /* On error, drop this fragment directly; fragments already in the queue
     * are not released here. If reassembly fails, they are released when the
     * fragment memory limit is exceeded or the timer expires */
    kfree_skb(skb);
}
2.5 The ip_frag_reasm() function
The ip_frag_reasm() function performs the final reassembly once all the data has been correctly received:
static struct sk_buff *ip_frag_reasm(struct ipq *qp, struct net_device *dev)
{
    struct iphdr *iph;
    struct sk_buff *fp, *head = qp->fragments;
    int len;
    int ihlen;

    /* Unlink the node from the list and delete its timer */
    ipq_kill(qp);

    BUG_TRAP(head != NULL);
    BUG_TRAP(FRAG_CB(head)->offset == 0);

    /* Allocate a new buffer for the datagram. */
    ihlen = head->nh.iph->ihl * 4;
    len = ihlen + qp->len;

    /* The total IP length exceeds the limit; discard */
    if (len > 65535)
        goto out_oversize;

    /* Head of list must not be cloned. */
    if (skb_cloned(head) && pskb_expand_head(head, 0, 0, GFP_ATOMIC))
        goto out_nomem;

    /* If the first fragment is fragmented itself, we split
     * it to two chunks: the first with data and paged part
     * and the second, holding only fragments. */
    if (skb_shinfo(head)->frag_list) {
        /* The first skb in the queue must not itself carry a frag_list.
         * If it does, allocate another skb with data length 0; the head's
         * frag_list (in struct skb_shared_info) then holds all the fragment
         * skbs. This is still a valid skb representation, just not one
         * contiguous data block; the data can later be copied into a
         * contiguous block by skb_linearize(). */
        struct sk_buff *clone;
        int i, plen = 0;

        if ((clone = alloc_skb(0, GFP_ATOMIC)) == NULL)
            goto out_nomem;
        clone->next = head->next;
        head->next = clone;
        skb_shinfo(clone)->frag_list = skb_shinfo(head)->frag_list;
        skb_shinfo(head)->frag_list = NULL;
        for (i = 0; i < skb_shinfo(head)->nr_frags; i++)
            plen += skb_shinfo(head)->frags[i].size;
        clone->len = clone->data_len = head->data_len - plen;
        head->data_len -= clone->len;
        head->len -= clone->len;
        clone->csum = 0;
        clone->ip_summed = head->ip_summed;
        atomic_add(clone->truesize, &ip_frag_mem);
    }

    skb_shinfo(head)->frag_list = head->next;
    skb_push(head, head->data - head->nh.raw);
    atomic_sub(head->truesize, &ip_frag_mem);

    /* Accumulate the data lengths of all following fragments in order, and
     * subtract them from the allocated fragment memory counter */
    for (fp = head->next; fp; fp = fp->next) {
        head->data_len += fp->len;
        head->len += fp->len;
        if (head->ip_summed != fp->ip_summed)
            head->ip_summed = CHECKSUM_NONE;
        else if (head->ip_summed == CHECKSUM_HW)
            head->csum = csum_add(head->csum, fp->csum);
        head->truesize += fp->truesize;
        atomic_sub(fp->truesize, &ip_frag_mem);
    }

    head->next = NULL;
    head->dev = dev;
    head->stamp = qp->stamp;

    /* Reset the length and the fragment flags in the IP header */
    iph = head->nh.iph;
    iph->frag_off = 0;
    iph->tot_len = htons(len);

    IP_INC_STATS_BH(IpReasmOKs);
    /* The fragment skbs have all been consumed; they must not be freed
     * again when the qp is released */
    qp->fragments = NULL;
    return head;

out_nomem:
    NETDEBUG(if (net_ratelimit())
             printk(KERN_ERR
                    "IP: queue_glue: no memory for gluing queue %p\n",
                    qp));
    goto out_fail;
out_oversize:
    if (net_ratelimit())
        printk(KERN_INFO
               "Oversized IP packet from %d.%d.%d.%d.\n",
               NIPQUAD(qp->saddr));
out_fail:
    IP_INC_STATS_BH(IpReasmFails);
    return NULL;
}
2.6 Releasing the ipq
After reassembly completes, the fragment queue is released:
static __inline__ void ipq_put(struct ipq *ipq)
{
    if (atomic_dec_and_test(&ipq->refcnt))
        ip_frag_destroy(ipq);
}

/* Complete destruction of ipq. */
static void ip_frag_destroy(struct ipq *qp)
{
    struct sk_buff *fp;

    BUG_TRAP(qp->last_in & COMPLETE);
    BUG_TRAP(del_timer(&qp->timer) == 0);

    /* Release all fragment data. */
    fp = qp->fragments;
    while (fp) {
        struct sk_buff *xp = fp->next;

        /* Free each fragment skb */
        frag_kfree_skb(fp);
        fp = xp;
    }

    /* Finally, release the queue descriptor itself. */
    frag_free_queue(qp);
}
3. Conclusion
The Linux IP fragment reassembly code accounts for many possible abnormal cases, which gives it good security. Reassembling packets before they enter the netfilter framework therefore defends against a wide range of fragment attacks.
How to calculate correlation in PySpark
This recipe helps you calculate correlation in PySpark
Recipe Objective: How to Calculate correlation in PySpark?
In this recipe, we learn how the correlation between two columns of a dataframe can be calculated. In a general sense, correlation measures the strength of a linear relationship between two quantitative variables. It provides a quantitative measurement of the statistical dependence between two random variables. A positive correlation means that as one variable increases, the other tends to increase as well, and vice versa; a negative correlation means that as one variable increases, the other tends to decrease. Height and weight are positively correlated, for example, while the price and demand of a commodity are negatively correlated: when the price increases, demand generally goes down.
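The quantity Spark computes here by default is the sample Pearson correlation coefficient. A minimal pure-Python sketch, independent of Spark, shows what that number measures:

```python
import math

def pearson_corr(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two variables increasing together in lockstep -> +1.0
print(pearson_corr([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0
# One increases while the other decreases -> -1.0
print(pearson_corr([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0
```

Values between the extremes indicate a weaker linear relationship, and 0 indicates no linear relationship at all.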
Prerequisites:
Before proceeding with the recipe, make sure the following installations are done on your local EC2 instance.
Steps to set up an environment:
• In the AWS, create an EC2 instance and log in to Cloudera Manager with your public IP mentioned in the EC2 instance. Login to putty/terminal and check if PySpark is installed. If not installed, please find the links provided above for installations.
• Type "<your public IP>:7180" in the web browser and log in to Cloudera Manager, where you can check if Hadoop, Hive, and Spark are installed.
• If they are not visible in the Cloudera cluster, you may add them by clicking on the "Add Services" in the cluster to add the required services in your local instance.
Calculating correlation using PySpark:
Setup the environment variables for Pyspark, Java, Spark, and python library. As shown below:
[Screenshot: environment variable setup]
Please note that these paths may vary in one's EC2 instance. Provide the full path where these are stored in your instance.
Import the Spark session and initialize it. You can name your application and master program at this step. We provide appName as "demo," and the master program is set as "local" in this recipe.
[Screenshot: Spark session initialization]
We demonstrated this recipe using a CSV file "IRIS.csv," which is present in the HDFS.
The CSV file is first read and loaded to create a dataframe, and this dataframe is examined to know its schema (using printSchema() method) and to check the data present in it(using show()).
[Screenshot: dataframe schema and sample rows]
The dataFrame.stat.corr() function is used to calculate the correlation. The columns between which the correlation is to be calculated are passed as arguments to this method. Let us first calculate the correlation between "sepal_length" and "petal_length", and then between "sepal_width" and "petal_width".
[Screenshot: correlation results]
We can see from the above results that the correlation between sepal_length and petal_length is a positive value which means they are positively related. And the sepal_width and petal_width are negatively correlated, which can be observed from the negative correlation value.
This is how the correlation between two columns of a dataframe can be calculated using PySpark.
Exploring the Android Experience on Windows RT
In the ever-evolving world of technology, the convergence of different operating systems has become a common trend. One such combination is Android on Windows RT, which brings together the best of both worlds. This powerful combination offers users a unique and versatile experience that combines the familiarity of Windows with the vast ecosystem of Android apps.
Windows RT, initially introduced by Microsoft as a lightweight version of its traditional Windows OS, was designed specifically for ARM-based devices such as tablets. Its main advantage lies in its ability to run on low-power processors, ensuring longer battery life and improved performance. Android, on the other hand, is an open-source operating system developed by Google, renowned for its extensive range of apps and customization options.
The integration of Android on Windows RT devices opens up a whole new world of possibilities. Users can now enjoy the benefits of a full-fledged Windows operating system while simultaneously accessing the vast library of Android apps. This means access to popular apps such as Instagram, Snapchat, and many others that were previously unavailable on Windows RT.
The combination of Android on Windows RT also brings forth a seamless user experience. Users can effortlessly switch between the familiar Windows interface and the Android environment, allowing for a smooth transition between productivity tasks and leisure activities. This integration ultimately enhances the overall usability of Windows RT devices, making them more versatile and adaptable to different user needs.
Moreover, the integration of Android on Windows RT devices opens up new opportunities for developers. They can now tap into the massive Android app market and create innovative applications that cater to Windows RT users. This not only expands the app ecosystem for Windows RT but also encourages developers to bring their creativity to a wider audience.
While the combination of Android on Windows RT offers numerous benefits, it is important to note that there may be some limitations. Windows RT, being a different architecture from traditional Windows, may not be able to support all Android apps seamlessly. Some apps may require additional optimization or may not be compatible at all. However, the vast majority of popular Android apps are expected to work seamlessly on Windows RT devices.
The integration of Android on Windows RT devices offers a powerful combination that combines the strengths of both operating systems. It provides users with a versatile and seamless experience, giving them access to a wide range of Android apps alongside the familiar Windows interface. This integration not only enhances the usability of Windows RT devices but also opens up new opportunities for developers. With Android on Windows RT, the possibilities are endless, and the future looks promising for this powerful combination.
What Platform is Windows RT Intended For?
Windows RT is specifically designed for devices that use the 32-bit ARM architecture, specifically ARMv7. It is a mobile operating system developed by Microsoft and is a variant of Windows 8 or Windows 8.1.
Here are some key points about the platform for Windows RT:
– Windows RT is optimized for devices that are powered by ARM-based processors. These processors are commonly found in smartphones, tablets, and other mobile devices.
– Unlike the regular version of Windows 8 or Windows 8.1, which runs on x86 or x64 processors, Windows RT is specifically tailored for ARM architecture. This enables better power efficiency and performance on ARM-based devices.
– Windows RT includes a touch-centric user interface, making it ideal for touch-enabled devices like tablets. It provides a seamless and intuitive user experience, with features and apps that are designed for touch interaction.
– Windows RT comes pre-installed on devices, and users cannot install it on their own or upgrade from other versions of Windows. It is typically found on devices from manufacturers like Microsoft, ASUS, Dell, Lenovo, and others.
– Windows RT includes a limited version of the Windows desktop environment, which allows users to run Office applications (Word, Excel, PowerPoint, and OneNote) and some other desktop apps. However, it does not support running traditional Windows applications designed for x86 or x64 processors.
Windows RT is intended for devices that use ARM-based processors, such as tablets and other mobile devices. It offers a touch-centric interface, optimized performance, and limited desktop functionality.
What is the Difference Between Windows and Windows RT?
Windows and Windows RT are both operating systems developed by Microsoft, but they have some key differences. Here are the main distinctions between the two:
1. Compatibility: One of the major differences between Windows and Windows RT lies in their compatibility with software. Windows is a full-fledged operating system that is compatible with a wide range of software applications. It is backward compatible, meaning it can run most software that was developed for earlier versions of Windows. On the other hand, Windows RT is a more limited operating system designed specifically for tablets. It does not allow users to install traditional desktop software written for Windows PCs. Instead, it can only run apps from the Windows Store.
2. Processor Architecture: Another difference is the processor architecture they support. Windows is designed to run on devices with x86 or x64 processors, which are commonly found in PCs and laptops. This compatibility allows Windows to support a vast array of hardware configurations. In contrast, Windows RT is designed to run on devices with ARM-based processors, which are typically used in smartphones and tablets. This limitation means that Windows RT is only compatible with a specific set of hardware devices.
3. User Interface: While the overall user interface of Windows and Windows RT may appear similar, there are some differences in the specific features and functionality. Windows RT includes a touch-optimized version of the traditional Windows desktop, similar to what you would find on a PC. However, it lacks some of the more advanced features and customization options found in the full Windows version. Additionally, Windows RT emphasizes the use of the modern Windows interface, with touch-friendly Live Tiles and full-screen apps.
Windows is a comprehensive operating system that offers full compatibility with various software applications and runs on a wide range of hardware devices. Windows RT, on the other hand, is a more limited operating system specifically designed for tablets, running on devices with ARM-based processors and only supporting apps from the Windows Store.
What Windows Does Surface RT Have?
Surface RT comes with Windows RT, which is a version of Windows specifically designed for devices with ARM processors. Windows RT has a similar interface to other versions of Windows, but it is not as feature-rich as Windows 8 or Windows 10.
Here are some key features of Windows RT on Surface RT:
1. Start Screen: Like other versions of Windows, Surface RT has a Start Screen that displays tiles for apps and live updates. You can customize the layout and size of the tiles to suit your preferences.
2. Desktop Mode: Surface RT includes a desktop mode that resembles the traditional Windows desktop interface. However, it has some limitations compared to the full Windows desktop experience found on other devices. For example, you can only install apps from the Windows Store on Surface RT, and traditional desktop applications designed for x86 processors will not work.
3. Office RT: Surface RT comes pre-installed with Office RT, which includes Word, Excel, PowerPoint, and OneNote. These apps are optimized for touch and offer a similar experience to the full versions of Office on other devices.
4. Internet Explorer: Surface RT includes Internet Explorer for web browsing. However, it is limited to the desktop version of Internet Explorer and does not support other browsers like Chrome or Firefox.
5. Windows Store: Surface RT can only install apps from the Windows Store. It does not support installing traditional desktop applications like Photoshop or iTunes.
6. Limited compatibility: Windows RT is not compatible with all Windows software. It can only run apps specifically designed for the ARM architecture, which limits the available software compared to other versions of Windows.
It’s important to note that Windows RT is a discontinued operating system, and Microsoft has ended support for it. Surface RT and Surface 2 devices can no longer receive Windows updates beyond Windows RT 8.1 Update 3.
Conclusion
Android on Windows RT is not officially supported by Microsoft and therefore not available as a native operating system option for Windows RT devices. Windows RT is specifically designed to run on ARM-based processors and is limited to running Windows Store apps only. This means that users cannot install or run Android apps or use the Android operating system on their Windows RT devices. While there have been attempts by third-party developers to port Android to Windows RT, these projects are not officially supported and may not provide a stable or reliable experience. It is important for users to understand the limitations and compatibility issues when considering Android on Windows RT.
Sanjeev Singh
Sanjeev is the tech editor at DeviceMAG. He has a keen interest in all things technology, and loves to write about the latest developments in the industry. He has a passion for quality-focused journalism and believes in using technology to make people's lives better. He has worked in the tech industry for over 15 years, and has written for some of the biggest tech blogs in the world. Sanjeev is also an avid photographer and loves spending time with his family. | __label__pos | 0.690905 |
Understanding Authentication
Discussion in 'Ethical hacking Tips' started by SpOonWiZaRd, Feb 14, 2008.
1. SpOonWiZaRd
Authentication proves that a user or system is actually who they say they are. This is one of the most critical parts of a security system. It's part of a process that is also referred to as identification and authentication (I&A). The identification process starts when a user ID or logon name is typed into a sign-on screen. Authentication is accomplished by challenging the claim about who is accessing the resource. Without authentication, anybody can claim to be anybody.
Authentication systems are based on one or more of these three factors:
• Something you know, such as a password
• Something you have, such as a smartcard or identification device.
• Something physically unique to you, such as your fingerprints or retinal pattern.
Systems authenticate each other using similar methods. Frequently, systems pass private information between each other to establish identity. Once authentication has occurred, the two systems can communicate in the manner specified in the design.
Several common methods are used for authentication. Each has advantages and disadvantages that must be considered when you are evaluating authentication schemes or methods, so I have gone over some of them briefly:
Username/Password - A username and password are unique identifiers for a logon process. When you sit down in front of a computer, the first thing you must do is establish who you are. Identification is typically confirmed through a logon process. Most operating systems use a user ID and password to accomplish this. These values can be sent over the network as plain text or can be encrypted. The logon process identifies to the operating system, and possibly the network, that you are who you say you are. The operating system compares this information to the stored information in the security processor and either accepts or denies the logon attempt.
PAP (Password Authentication Protocol) - It offers no true security, but it's one of the simplest forms of authentication. The username and password values are both sent to the server as clear text and checked for a match. If they match, the user is granted access; if they don't match, the user is denied access.
CHAP (Challenge Handshake Authentication Protocol) - It challenges a system to verify its identity. CHAP doesn't use a plain userID/password mechanism. Instead, the initiator sends a logon request from the client to the server. The server sends a challenge back to the client. The challenge is encrypted and then sent back to the server. The server compares the value from the client and, if the information matches, grants authorization. If the response fails, the session fails, and the request phase starts over.
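The challenge/response idea can be sketched in a few lines. This is not the actual CHAP wire format — real CHAP (RFC 1994) hashes an identifier, the secret, and the challenge with MD5 — but a hedged, modernized sketch using an HMAC as the response function; all names are illustrative:

```python
import hashlib
import hmac
import os

# Shared secret known to both client and server; it is never sent on the wire
SECRET = b"s3cret-password"

def server_challenge():
    # The server sends a fresh random challenge to the client
    return os.urandom(16)

def client_response(challenge, secret):
    # The client proves knowledge of the secret by keyed-hashing the challenge
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def server_verify(challenge, response, secret):
    # The server recomputes the expected response and compares in constant time
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

chal = server_challenge()
resp = client_response(chal, SECRET)
print(server_verify(chal, resp, SECRET))           # True
print(server_verify(chal, resp, b"wrong-guess"))   # False
```

Because only the challenge and the hashed response cross the network, an eavesdropper never sees the secret itself, and a fresh random challenge prevents simple replay.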
Certificates - This is another common form of authentication. A server or certificate authority (CA) can issue a certificate that will be accepted by the challenging system. Certificates can either be physical access devices, such as smart cards, or electronic certificates that are used as part of the logon process. A certificate practice statement (CPS) outlines the rules used for issuing and managing certificates. A certificate revocation list (CRL) lists the revocations that must be addressed (often due to expiration) in order to stay current. A simple way to think of certificates is like hall passes at school.
Security Tokens - These are similar to certificates. They contain the rights and access privileges of the token bearer as part of the token. Think of a token as a small piece of data that holds a sliver of information about the user. Many operating systems generate a token that is applied to every action taken on the computer system. If your token doesn't grant you access to certain information, then either that information won't be displayed or your access will be denied. The authentication system creates a token every time a user connects or a session begins. At the completion of a session, the token is destroyed.
Kerberos - It is an authentication protocol named after the mythical three-headed dog that stood at the gates of Hades. Originally designed at MIT, Kerberos is becoming a very popular authentication method. It allows for single sign-on to a distributed network. Kerberos authentication uses a key distribution center (KDC) to orchestrate the process. The KDC authenticates the principal (which can be a user, a program, or a system) and provides it with a ticket. Once this ticket is issued, it can be used to authenticate against other principals. Kerberos is quickly becoming a common standard in network environments. Its only significant weakness is that the KDC can be a single point of failure. If the KDC goes down, the authentication process will stop.
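As a rough illustration of the ticket idea only — real Kerberos encrypts tickets with per-principal keys and negotiates session keys rather than signing plain JSON — here is a sketch with illustrative names, showing a KDC issuing a time-limited credential that services can later verify without contacting the KDC again:

```python
import hashlib
import hmac
import json
import time

# Key shared by the KDC and the verifying services (a deliberate simplification)
KDC_KEY = b"kdc-master-key"

def issue_ticket(principal, lifetime=300):
    # The KDC binds the principal's name to an expiry time and signs the result
    body = json.dumps({"principal": principal,
                       "expires": int(time.time()) + lifetime}).encode()
    mac = hmac.new(KDC_KEY, body, hashlib.sha256).hexdigest()
    return body, mac

def verify_ticket(body, mac):
    # A service checks the signature, then checks that the ticket is still valid
    good = hmac.compare_digest(
        hmac.new(KDC_KEY, body, hashlib.sha256).hexdigest(), mac)
    return good and json.loads(body)["expires"] > time.time()

ticket, mac = issue_ticket("alice@EXAMPLE.COM")
print(verify_ticket(ticket, mac))        # True
print(verify_ticket(ticket, "0" * 64))   # False: a forged ticket is rejected
```

The expiry field also shows why tickets limit the damage of theft: a stolen ticket stops working once its lifetime runs out.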
Multifactor Authentication - When two or more access methods are included as part of the authentication process, you're implementing a multifactor system. A system that uses smart cards and passwords is referred to as a two-factor system.
Smart Cards - This is a type of badge or card that gives you access to resources, including buildings, parking lots, and computers. It contains information about your identity and access privileges. Each area or computer has a card scanner or reader in which you insert your card. The reader is connected to the workstation and validates against the security system.
Biometrics - This uses physical characteristics to identify the user. Such devices are becoming more common in the business environment. Biometric systems include hand scanners, retinal scanners, and soon, possibly, DNA scanners. To gain access to resources you must pass a physical screening process.
These are just the basics of what the authentication methods mentioned above do. There are more and stronger methods out there, but there will also be a loophole in every method. As hackers we must know how these methods work so that we can find out what makes them tick and then exploit them.
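To make the multifactor idea above concrete, here is a minimal sketch in Python. All names are hypothetical, and the password check is a toy stand-in (a real system would verify a proper password hash); the point is only that every factor must pass independently, so a single stolen credential is not enough.

```python
def check_password(stored, attempt):
    # Factor 1: something the user knows (plain comparison, toy only).
    return stored == attempt

def check_token(valid_tokens, presented):
    # Factor 2: something the user has (a smart card or security token).
    return presented in valid_tokens

def two_factor_login(stored, valid_tokens, attempt, presented):
    # Multifactor: every factor must pass on its own.
    return check_password(stored, attempt) and check_token(valid_tokens, presented)

tokens = {"CARD-42"}
print(two_factor_login("s3cret", tokens, "s3cret", "CARD-42"))  # True
print(two_factor_login("s3cret", tokens, "s3cret", "CARD-99"))  # False
```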
Documentation
Profilers
Hierarchical Profiler
During the data-collection phase, Phalcon\Xhprof tracks call counts and measures data such as elapsed wall time, CPU time, and memory usage:
<?php
function print_canonical($xhprof_data)
{
if (!is_array($xhprof_data)) {
throw new \UnexpectedValueException(sprintf("print_canonical expects an array, but %s given.", gettype($xhprof_data)));
}
ksort($xhprof_data);
foreach($xhprof_data as $func => $metrics) {
echo str_pad($func, 40) . ":";
ksort($metrics);
foreach ($metrics as $name => $value) {
$value = str_pad($value, 4, " ", STR_PAD_LEFT);
echo " {$name}={$value};";
}
echo "\n";
}
}
function bar() {
return 1;
}
function foo($x) {
$sum = 0;
for ($idx = 0; $idx < 2; $idx++) {
$sum += bar();
}
return strlen("hello: {$x}");
}
Phalcon\Xhprof::enable(Phalcon\Xhprof::FLAG_MEMORY | Phalcon\Xhprof::FLAG_CPU);
foo("this is a test");
$output = Phalcon\Xhprof::disable();
print_canonical($output);
Manual Profiler
Phalcon\Profiler is a profiling component that supports multiple levels of nesting:
<?php
$profiler = new Phalcon\Profiler();
$profiler->startProfile('one');
$profile = $profiler->getCurrentProfile(); // 'one'
$profile = $profiler->getLastProfile(); // 'one'
$profiler->startProfile('two');
$profile = $profiler->getCurrentProfile(); // 'two'
$profile = $profiler->getLastProfile(); // 'two'
$profiler->stopProfile('two');
$num = count($profiler->getProfiles()); // 1
$profile = $profiler->getCurrentProfile(); // 'one'
$profile = $profiler->getLastProfile(); // 'two'
$profiler->stopProfile('one');
$num = count($profiler->getProfiles()); // 2
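For readers more comfortable in Python, the nested start/stop semantics demonstrated above can be sketched roughly as a stack of open profiles plus a list of finished ones. The names below are made up for illustration; this is not Phalcon's implementation.

```python
import time

class Profiler:
    """Toy nested profiler mirroring the start/stop semantics shown above."""

    def __init__(self):
        self._open = []   # stack of profiles still running
        self._done = []   # profiles that have been stopped
        self._seq = 0     # start order, so last_profile() has a stable answer

    def start_profile(self, name):
        self._seq += 1
        self._open.append({"name": name, "seq": self._seq, "start": time.time()})

    def stop_profile(self, name):
        profile = self._open.pop()
        assert profile["name"] == name, "nested profiles stop in LIFO order"
        profile["stop"] = time.time()
        self._done.append(profile)

    def current_profile(self):
        # innermost profile that is still open
        return self._open[-1]["name"] if self._open else None

    def last_profile(self):
        # most recently started profile, whether open or finished
        candidates = self._open + self._done
        return max(candidates, key=lambda p: p["seq"])["name"] if candidates else None

    def profiles(self):
        return self._done

p = Profiler()
p.start_profile("one")
p.start_profile("two")
p.stop_profile("two")
print(p.current_profile(), p.last_profile(), len(p.profiles()))  # one two 1
p.stop_profile("one")
print(len(p.profiles()))  # 2
```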
DB Profiler
The Phalcon\Db\Profiler component is used to profile database operations in order to diagnose performance problems and find bottlenecks:
<?php
use Phalcon\Db\Profiler as DbProfiler;
$profiler = new DbProfiler();
// Set up the profiler component
$connection->setProfile($profiler);
$sql = "SELECT buyer_name, quantity, product_name "
. "FROM buyers "
. "LEFT JOIN products ON buyers.pid = products.id";
// Run the SQL
$connection->query($sql);
// Get the last profile
$profile = $profiler->getLastProfile();
echo "SQL Statement: ", $profile->getSQLStatement(), "\n";
echo "Start Time: ", $profile->getInitialTime(), "\n";
echo "Final Time: ", $profile->getFinalTime(), "\n";
echo "Total Elapsed Time: ", $profile->getTotalElapsedSeconds(), "\n";
echo "Total Usage Memory: ", $profile->getTotalUsageMemory(), "\n";
Alternatively, you can implement profiling by listening to the events yourself:
<?php
use Phalcon\Events\Manager as EventsManager;
use Phalcon\Db\Profiler as DbProfiler;
$eventsManager = new EventsManager();
$profiler = new DbProfiler();
// Listen to all database events
$eventsManager->attach('db', function ($event, $connection) use ($profiler) {
if ($event->getType() == 'beforeQuery') {
// Start profiling before the operation
$profiler->startProfile('db', ['sqlStatement' => $connection->getSQLStatement()]);
}
if ($event->getType() == 'afterQuery') {
// Stop profiling after the operation
$profiler->stopProfile();
}
});
// Set the events manager
$connection->setEventsManager($eventsManager);
$sql = "SELECT buyer_name, quantity, product_name "
. "FROM buyers "
. "LEFT JOIN products ON buyers.pid = products.id";
// Run the SQL
$connection->query($sql);
// Get the last profile
$profile = $profiler->getLastProfile();
echo "SQL Statement: ", $profile->getSQLStatement(), "\n";
echo "Start Time: ", $profile->getInitialTime(), "\n";
echo "Final Time: ", $profile->getFinalTime(), "\n";
echo "Total Elapsed Time: ", $profile->getTotalElapsedSeconds(), "\n";
echo "Total Usage Memory: ", $profile->getTotalUsageMemory(), "\n";
You can also build your own profiler classes based on Phalcon\Profiler and Phalcon\Db\Profiler.
I have an assignment due in 2 days and I've hit a bit of a wall. I have a client program which receives parts of a file from multiple servers and combines them into the original file. The client can receive the files without a problem but won't combine them: when fopen is called it seems to return an empty file.
int combineFiles(char * filename, int parts){
FILE *writeFile, *readFile;
char partName[256];
int i, readAmount;
char buffer[1024];
if((writeFile = fopen(filename, "wb")) == NULL){
return -1;
}
for(i = 1; i <= parts; i++){
sprintf(partName, "%s.part.%d", filename, i);
if((readFile = fopen(partName, "rb")) == NULL){
fclose(writeFile);
return -1;
}
do{
if((readAmount = fread(buffer, sizeof(char), PACKETSIZE, readFile)) < 0){
fclose(writeFile);
fclose(readFile);
return -1;
}
fwrite(buffer, sizeof(char), readAmount, writeFile);
}while(readAmount > 0);
fclose(readFile);
}
fclose(writeFile);
return 0;
}
Because the program thinks the files are empty it won't write anything to the new one. I've checked the files: they all exist and are not empty. fgets on the file results in a blank string. I have no idea what is going on. Any ideas?
Turns out that when I downloaded the files from the server I forgot to close them, so calling fopen again was a bit redundant. It also meant I was starting to read at the end, hence the seemingly false EOF.
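The same combine step can be sketched in Python, where `with` blocks close each file automatically and sidestep exactly the forgot-to-close mistake described above. The `<name>.part.<i>` naming follows the convention in the C code.

```python
def combine_files(filename, parts):
    # Reassemble `filename` from filename.part.1 .. filename.part.<parts>.
    # Every file is closed as soon as its `with` block ends.
    with open(filename, "wb") as out:
        for i in range(1, parts + 1):
            with open(f"{filename}.part.{i}", "rb") as part:
                while True:
                    chunk = part.read(1024)
                    if not chunk:        # empty read means end of this part
                        break
                    out.write(chunk)
```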
Joy Lee 🌻
Husky and lint-staged: Keeping Code Consistent
When a team works on a software project together, it's important for everyone's code to be neat and easy to understand. But sometimes, different computers and ways of working can make the code messy. Tools like husky and lint-staged can help fix this problem by checking the code automatically before it's added to the project.
What is lint-staged?
lint-staged is a tool that checks your code for mistakes and fixes them when it's staged in git. By using lint-staged, it helps keep your code clean and consistent.
Installation
1 . Install lint-staged as a development dependency:
npm install --save-dev lint-staged
2 . Configure lint-staged in your package.json file to run eslint and prettier on js and ts files.
"lint-staged": {
"*.{js,jsx,ts,tsx}": [
"eslint --fix --max-warnings=0", // both errors and warnings must be fixed
// "eslint --fix" // errors must be fixed but warnings can be ignored
"prettier --write"
]
}
3 . Run lint-staged on staged files using the following command:
npx lint-staged
What is husky?
husky is a tool that manages git hooks, automatically running scripts before each git commit. This setup ensures that lint-staged checks your code before it's committed. It helps you maintain code quality by catching issues before they're finalized.
Installation
1 . Install husky and initialize it:
# husky init (create .husky folder)
npx husky-init && npm install
# husky - Git hooks install
npx husky install
2 . Check if prepare command is added in your package.json
"scripts": {
"prepare": "husky install"
},
3 . Edit .husky > pre-commit file with the following to run lint-staged before each commit
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"
npx lint-staged
How It Works
1. Stage your code changes.
2. husky triggers the pre-commit hook.
3. The pre-commit hook executes lint-staged.
4. lint-staged runs eslint and prettier checks on staged files.
5. If errors or warnings are found, the commit is prevented with an error message.
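The gatekeeping in steps 4 and 5 boils down to: run each configured check over the staged files and block the commit if any check reports problems. A rough, tool-agnostic sketch of that decision (hypothetical names; this is not lint-staged's actual code):

```python
def commit_allowed(staged_files, checks):
    # `checks` maps a check name to a function returning a list of problems
    # for one file; the commit goes through only if every check is clean.
    failures = {}
    for name, check in checks.items():
        problems = [p for f in staged_files for p in check(f)]
        if problems:
            failures[name] = problems
    return len(failures) == 0, failures

# Toy check standing in for eslint/prettier:
def no_tabs(f):
    return ["tab found in " + f["name"]] if "\t" in f["src"] else []

clean = [{"name": "a.js", "src": "let x = 1;"}]
dirty = [{"name": "b.js", "src": "let\ty = 2;"}]
print(commit_allowed(clean, {"no-tabs": no_tabs})[0])  # True
print(commit_allowed(dirty, {"no-tabs": no_tabs})[0])  # False
```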
-- --------------------------------------------------------------------------------
-- File Name : functions_x_user.sql
-- --------------------------------------------------------------------------------
-- Author    : Danilo Vizzarro (http://www.danilovizzarro.it)
-- Download  : http://www.danilovizzarro.it/scripts/
-- Date      : 27-MAR-2009
-- Version   : 1.0
-- --------------------------------------------------------------------------------
-- Usage     : This script returns the list of functions that
--             USERNAME_TO_BE_CHECKED can access using
--             CURRENT_RESPONSIBILITY, taking function and menu exclusions
--             into account. The following variables should be replaced:
--             USERNAME_TO_BE_CHECKED
--             CURRENT_RESPONSIBILITY
-- --------------------------------------------------------------------------------
-- License   : http://creativecommons.org/licenses/by-nc/3.0/
--             You are free:
--             -> to Share - to copy, distribute and transmit the work
--             -> to Remix - to adapt the work
--             Under the following conditions:
--             -> Attribution. You must attribute the work in the manner
--                specified by the author or licensor (but not in any way that
--                suggests that they endorse you or your use of the work).
--             -> Noncommercial. You may not use this work for commercial
--                purposes.
-- --------------------------------------------------------------------------------
SELECT FU.USER_NAME,
       FRTL.RESPONSIBILITY_NAME,
       FFL.USER_FUNCTION_NAME,
       FFF.FUNCTION_NAME
  FROM FND_USER FU,
       FND_USER_RESP_GROUPS FURG,
       FND_RESPONSIBILITY FR,
       FND_COMPILED_MENU_FUNCTIONS FCMF,
       FND_FORM_FUNCTIONS FFF,
       FND_RESPONSIBILITY_TL FRTL,
       FND_FORM_FUNCTIONS_TL FFL
 WHERE FURG.RESPONSIBILITY_ID = FR.RESPONSIBILITY_ID
   AND FURG.RESPONSIBILITY_APPLICATION_ID = FR.APPLICATION_ID
   AND FR.MENU_ID = FCMF.MENU_ID
   AND FCMF.GRANT_FLAG = 'Y'
   AND FCMF.FUNCTION_ID = FFF.FUNCTION_ID
   AND FURG.USER_ID = FU.USER_ID
   AND SYSDATE BETWEEN FU.START_DATE AND NVL(FU.END_DATE, SYSDATE+1)
   AND SYSDATE BETWEEN FR.START_DATE AND NVL(FR.END_DATE, SYSDATE+1)
   AND FURG.RESPONSIBILITY_ID = FRTL.RESPONSIBILITY_ID
   AND FR.RESPONSIBILITY_ID = FRTL.RESPONSIBILITY_ID
   AND FRTL.LANGUAGE = 'US'
   AND FFL.LANGUAGE = 'US'
   AND FFF.FUNCTION_ID = FFL.FUNCTION_ID
   AND (FURG.END_DATE > SYSDATE OR FURG.END_DATE IS NULL)
   AND FU.USER_NAME = 'USERNAME_TO_BE_CHECKED'
   AND FRTL.RESPONSIBILITY_NAME = 'CURRENT_RESPONSIBILITY'
   AND FFF.FUNCTION_NAME NOT IN (
        SELECT FF.FUNCTION_NAME
          FROM FND_RESPONSIBILITY R,
               FND_USER_RESP_GROUPS RG,
               FND_USER U,
               FND_RESP_FUNCTIONS RF,
               FND_FORM_FUNCTIONS FF,
               FND_RESPONSIBILITY_TL FRTL
         WHERE RG.RESPONSIBILITY_ID = R.RESPONSIBILITY_ID
           AND U.USER_ID = RG.USER_ID
           AND RF.RESPONSIBILITY_ID = R.RESPONSIBILITY_ID
           AND RF.RULE_TYPE = 'F'
           AND FF.FUNCTION_ID = RF.ACTION_ID
           AND FRTL.RESPONSIBILITY_ID = R.RESPONSIBILITY_ID
           AND FRTL.RESPONSIBILITY_ID = RG.RESPONSIBILITY_ID
           AND FRTL.LANGUAGE = 'US'
           AND U.USER_NAME = UPPER('USERNAME_TO_BE_CHECKED')
           AND FRTL.RESPONSIBILITY_NAME = 'CURRENT_RESPONSIBILITY')
   AND FFF.FUNCTION_NAME NOT IN (
        SELECT FUNCTION_NAME
          FROM (SELECT DISTINCT
                       (SELECT FUNCTION_NAME
                          FROM FND_FORM_FUNCTIONS F
                         WHERE F.FUNCTION_ID = ME.FUNCTION_ID) FUNCTION_NAME,
                       MENU_ID
                  FROM FND_MENU_ENTRIES ME
                 START WITH ME.MENU_ID IN (
                       SELECT RF.ACTION_ID
                         FROM FND_RESPONSIBILITY R,
                              FND_USER_RESP_GROUPS RG,
                              FND_USER U,
                              FND_RESP_FUNCTIONS RF,
                              FND_RESPONSIBILITY_TL FRTL
                        WHERE RG.RESPONSIBILITY_ID = R.RESPONSIBILITY_ID
                          AND U.USER_ID = RG.USER_ID
                          AND RF.RESPONSIBILITY_ID = R.RESPONSIBILITY_ID
                          AND RF.RULE_TYPE = 'M'
                          AND FRTL.RESPONSIBILITY_ID = R.RESPONSIBILITY_ID
                          AND FRTL.RESPONSIBILITY_ID = RF.RESPONSIBILITY_ID
                          AND U.USER_NAME = UPPER('USERNAME_TO_BE_CHECKED')
                          AND FRTL.RESPONSIBILITY_NAME = 'CURRENT_RESPONSIBILITY')
               CONNECT BY ME.MENU_ID = PRIOR ME.SUB_MENU_ID)
         WHERE FUNCTION_NAME IS NOT NULL)
 ORDER BY 1,2,3
/
Fix Your Magento Problems with Ease: The Most Complete Collection of Common Magento SQL Commands
September 22, 2012, by 灵犀一指, 4,022 views
Magento SQL commands can speed up fixing the problems you run into. The database may differ between versions, so the SQL commands may change accordingly. Always remember to back up before running any SQL!!
1. Adjust the prices of all products in bulk (the reference command was verified on 1.3)
UPDATE `catalog_product_entity_decimal` SET value=round(value*1.45) WHERE attribute_id=99;
After running it, go to Cache Management and refresh the Layered Navigation Indices to synchronize the related tables in the database.
2. Process all images in the exclude state in bulk
UPDATE `catalog_product_entity_media_gallery_value` SET disabled=0 WHERE disabled=1;
3. Export and import all Magento categories and products
Categories and products are stored in the tables whose names start with catalog; exporting and importing that group of tables accomplishes this.
Before importing the SQL file of categories and products, note:
Add as the first line: SET FOREIGN_KEY_CHECKS=0;
Add as the last line: SET FOREIGN_KEY_CHECKS=1;
The reason is that Magento uses the InnoDB storage engine.
4. Set categories' Display Settings -> Is Anchor value to No in bulk
UPDATE `catalog_category_entity_int` set value=0 WHERE value=1 AND attribute_id=120;
5. Empty the newsletter queue
TRUNCATE TABLE `newsletter_queue`;
TRUNCATE TABLE `newsletter_queue_link`;
TRUNCATE TABLE `newsletter_queue_store_link`;
Note: when sending mail to tens of thousands of customers at the same time, do not view the mail queue in the admin panel, or the server load will be very high. After the mail has been sent, remember to empty the queue, so that opening the mail queue in the admin panel no longer consumes a lot of server resources.
6. After migrating a Magento site, the following notice often appears; running the SQL commands below restores things to normal.
Error message: Notice: Undefined index: 0 app/code/core/Mage/Core/Model/Mysql4/Config.php on line 92
SET FOREIGN_KEY_CHECKS=0;
update `core_store` set store_id = 0 where code='admin';
update `core_store_group` set group_id = 0 where name='Default';
update `core_website` set website_id = 0 where code='admin';
update `customer_group` set customer_group_id = 0 where customer_group_code='NOT LOGGED IN';
SET FOREIGN_KEY_CHECKS=1;
Be aware, though, that this error is caused by using a third-party database backup tool; Magento's built-in backup feature will not produce it.
7. Set product custom options to not-required in bulk, based on product SKU:
UPDATE `catalog_product_option` SET is_require=0 WHERE product_id IN (SELECT entity_id FROM
`catalog_product_entity` WHERE sku LIKE 'SKU %');
8. Disable/enable all out-of-stock products
SET FOREIGN_KEY_CHECKS=0;
UPDATE `catalog_product_entity_int` SET value=2 WHERE attribute_id=80 and entity_id IN
(SELECT product_id FROM `cataloginventory_stock_status` WHERE stock_status=0);
SET FOREIGN_KEY_CHECKS=1;
Additional notes: value=2 disables and value=1 enables; attribute_id corresponds to the product status attribute, which differs between versions. After running the commands, rebuild the category indexes.
9. Cancel all problem newsletter subscriptions
UPDATE `newsletter_subscriber` SET subscriber_status=3 WHERE subscriber_id IN (SELECT
subscriber_id FROM `newsletter_problem`);
10. Clear product and category descriptions and meta data
Reset the short description of all products:
UPDATE `catalog_product_entity_text` SET value='Short Description' WHERE
attribute_id=506;
Clear all product meta data:
UPDATE `catalog_product_entity_text` SET value='' WHERE attribute_id=97 OR
attribute_id=104;
UPDATE `catalog_product_entity_varchar` SET value='' WHERE attribute_id=103 OR
attribute_id=105;
Clear all product URL keys:
UPDATE `catalog_product_entity_varchar` SET value='' WHERE attribute_id=481;
Clear all category descriptions:
UPDATE `catalog_category_entity_text` SET value='' WHERE attribute_id=112 OR
attribute_id=115 OR attribute_id=116;
Clear all category URL keys:
UPDATE `catalog_category_entity_varchar` SET value='' WHERE attribute_id=479;
11. Reset all Magento ID counters (such as order numbers, invoice numbers, etc.)
TRUNCATE `eav_entity_store`;
ALTER TABLE `eav_entity_store` AUTO_INCREMENT=1;
12. Disable products in bulk via database operations
CREATE TABLE XYTMPTB SELECT entity_id,value FROM `catalog_product_entity_varchar` WHERE
value LIKE 'affliction%' AND attribute_id=96;
UPDATE `catalog_product_entity_int` SET value=1 WHERE attribute_id=273 AND entity_id IN
(SELECT entity_id FROM `XYTMPTB`);
DROP TABLE XYTMPTB;
Don't forget to rebuild the indexes!
13. Invert the enabled/disabled state of categories and products
UPDATE `catalog_category_entity_int` SET value=if(value=0,1,0) WHERE attribute_id=119;
UPDATE `catalog_product_entity_int` SET value=if(value=1,2,1) WHERE attribute_id=273;
Run it once and enabled items become disabled and disabled items become enabled; run it again to switch back. Finally, don't forget to rebuild the indexes!
14. Tips for cleaning out a site
Clean whatever you can from the admin panel; operating on the database directly carries a risk of breaking the site. Other helper commands are as follows:
Clear orders:
TRUNCATE `sales_flat_invoice`;
TRUNCATE `sales_flat_invoice_grid`;
TRUNCATE `sales_flat_invoice_item`;
TRUNCATE `sales_flat_order`;
TRUNCATE `sales_flat_order_address`;
TRUNCATE `sales_flat_order_grid`;
TRUNCATE `sales_flat_order_item`;
TRUNCATE `sales_flat_order_payment`;
TRUNCATE `sales_flat_order_status_history`;
TRUNCATE `sales_flat_quote`;
TRUNCATE `sales_flat_quote_address`;
TRUNCATE `sales_flat_quote_address_item`;
TRUNCATE `sales_flat_quote_item`;
TRUNCATE `sales_flat_quote_item_option`;
TRUNCATE `sales_flat_quote_payment`;
TRUNCATE `sales_flat_quote_shipping_rate`;
Clear other logs:
TRUNCATE `log_url_info`;
TRUNCATE `log_visitor_info`;
TRUNCATE `log_url`;
TRUNCATE `log_visitor`;
TRUNCATE `core_url_rewrite`;
TRUNCATE `report_event`;
TRUNCATE `report_viewed_product_index`;
15. Magento database cleanup
Safe mode: clean useless day-to-day records from the database
TRUNCATE TABLE `log_visitor`;
TRUNCATE TABLE `log_visitor_info`;
TRUNCATE TABLE `log_url`;
TRUNCATE TABLE `log_url_info`;
Clean mode: clean useless records from the database
TRUNCATE `log_visitor` ;
TRUNCATE `log_url_info` ;
TRUNCATE `log_visitor_info` ;
TRUNCATE `dataflow_batch_import` ;
TRUNCATE `log_url` ;
TRUNCATE `report_event` ;
TRUNCATE `log_visitor_online` ;
Note: if you are migrating the site, the URL-rewrite table core_url_rewrite can also be emptied; just rebuild the URLs after the migration.
16. Replace part of a word in fields such as SKU, Meta, and Name in bulk
UPDATE `catalog_product_entity` SET sku=replace(sku,'oldskuw','newskuw') WHERE sku LIKE
'%oldskuw%';
UPDATE `catalog_product_entity_text` SET value=replace(value,'oldmetaw','newmetaw')
WHERE value LIKE '%oldmetaw%';
UPDATE `catalog_product_entity_varchar` SET value=replace(value,'oldnamew','newnamew')
WHERE value LIKE '%oldnamew%';
17. Adjust the prices of specified products in bulk
create table xytmptb SELECT entity_id,value FROM `catalog_product_entity_varchar` WHERE
(value LIKE '%Boot%' OR value LIKE '%Shoes%') AND attribute_id=60;
UPDATE `catalog_product_entity_decimal` SET value=value+10 WHERE entity_id IN (SELECT
entity_id FROM `xytmptb`) AND attribute_id=64;
drop table xytmptb;
Don't forget to rebuild the price index at the end!
Convert YAML to JSON
Form for converting YAML
This form allows you to convert YAML to JSON data; paste or upload your YML file below:
Your result can be seen below.
Result of YAML conversion to JSON
Use "Paste Code" to save it
About YAML conversion to JSON
The Convert YAML to JSON tool was created for converting YAML into the equivalent JSON format online. It converts YAML Ain't Markup Language (YAML) documents to JSON (JavaScript Object Notation) documents, and of course it's a totally free converter. You do not need to download any tools for the conversion.
How it Works?
Just paste your YAML into the textarea above and click the "Convert" button, and you will get the JSON data in the next textarea.
Example of YAML conversion to JSON
Before:
# test yaml -----------------------------------------------#
namespace: common\tests
actor_suffix: Tester
paths:
tests: tests
output: tests/_output
data: tests/_data
support: tests/_support
settings:
bootstrap: _bootstrap.php
colors: true
memory_limit: 1024M
modules:
config:
wtools:
configFile: 'config/test-local.php'
After:
{
"namespace": "common\tests",
"actor_suffix": "Tester",
"paths":
{
"tests": "tests",
"output": "tests/_output",
"data": "tests/_data",
"support": "tests/_support"
},
"settings":
{
"bootstrap": "_bootstrap.php",
"colors": true,
"memory_limit": "1024M"
},
"modules":
{
"config":
{
"wtools":
{
"configFile": "config/test-local.php"
}
}
}
}
After the conversion, you can apply the JSON to your project or use it for some other purpose.
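Under the hood, a converter like this is essentially "parse the YAML, then serialize it as JSON". With the third-party PyYAML library that is roughly json.dumps(yaml.safe_load(text), indent=2); the JSON-serializing half can be sketched with the standard library alone, using a dict standing in for the parsed YAML above:

```python
import json

# Dict standing in for what a YAML parser would produce from the example above.
parsed = {
    "namespace": "common\\tests",
    "actor_suffix": "Tester",
    "paths": {"tests": "tests", "output": "tests/_output"},
    "settings": {"bootstrap": "_bootstrap.php", "colors": True},
}

as_json = json.dumps(parsed, indent=2)
print(as_json)
```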
Qadir Hussain - 7 months ago
Converting Hex String to NSData in Swift
I got the code to convert a String to a hex string in Objective-C.
- (NSString *) CreateDataWithHexString:(NSString*)inputString
{
NSUInteger inLength = [inputString length];
unichar *inCharacters = alloca(sizeof(unichar) * inLength);
[inputString getCharacters:inCharacters range:NSMakeRange(0, inLength)];
UInt8 *outBytes = malloc(sizeof(UInt8) * ((inLength / 2) + 1));
NSInteger i, o = 0;
UInt8 outByte = 0;
for (i = 0; i < inLength; i++) {
UInt8 c = inCharacters[i];
SInt8 value = -1;
if (c >= '0' && c <= '9') value = (c - '0');
else if (c >= 'A' && c <= 'F') value = 10 + (c - 'A');
else if (c >= 'a' && c <= 'f') value = 10 + (c - 'a');
if (value >= 0) {
if (i % 2 == 1) {
outBytes[o++] = (outByte << 4) | value;
outByte = 0;
} else {
outByte = value;
}
} else {
if (o != 0) break;
}
}
NSData *a = [[NSData alloc] initWithBytesNoCopy:outBytes length:o freeWhenDone:YES];
NSString* newStr = [NSString stringWithUTF8String:[a bytes]];
return newStr;
}
I want the same in Swift. Can anybody translate this code to Swift,
or is there an easy way to do this in Swift?
Please help
Rob
Answer
This is my hex string to NSData routine:
extension String {
/// Create NSData from hexadecimal string representation
///
/// This takes a hexadecimal representation and creates a NSData object. Note, if the string has any spaces or non-hex characters (e.g. starts with '<' and with a '>'), those are ignored and only hex characters are processed.
///
/// The use of `strtoul` inspired by Martin R at http://stackoverflow.com/a/26284562/1271826
///
/// - returns: NSData represented by this hexadecimal string.
func dataFromHexadecimalString() -> NSData? {
let data = NSMutableData(capacity: characters.count / 2)
let regex = try! NSRegularExpression(pattern: "[0-9a-f]{1,2}", options: .CaseInsensitive)
regex.enumerateMatchesInString(self, options: [], range: NSMakeRange(0, characters.count)) { match, flags, stop in
let byteString = (self as NSString).substringWithRange(match!.range)
let num = UInt8(byteString.withCString { strtoul($0, nil, 16) })
data?.appendBytes([num], length: 1)
}
return data
}
}
Note, the above is written for Swift 2.0. See the revision history of this answer if you want to see Swift 1.2 or 1.1 renditions.
And for the sake of completeness, this is my NSData to hex string routine:
extension NSData {
/// Create hexadecimal string representation of NSData object.
///
/// - returns: String representation of this NSData object.
func hexadecimalString() -> String {
var string = ""
var byte: UInt8 = 0
for i in 0 ..< length {
getBytes(&byte, range: NSMakeRange(i, 1))
string += String(format: "%02x", byte)
}
return string
}
}
Note, as shown in the above, I generally only convert between hexadecimal representations and NSData instances (because if the information could have been represented as a string you probably wouldn't have created a hexadecimal representation in the first place). But your original question wanted to convert between hexadecimal representations and String objects, and that might look like so:
extension String {
/// Create NSData from hexadecimal string representation
///
/// This takes a hexadecimal representation and creates a String object from taht. Note, if the string has any spaces, those are removed. Also if the string started with a '<' or ended with a '>', those are removed, too.
///
/// - parameter encoding: The NSStringCoding that indicates how the binary data represented by the hex string should be converted to a String.
///
/// - returns: String represented by this hexadecimal string. Returns nil if string contains characters outside the 0-9 and a-f range or if a string cannot be created using the provided encoding
func stringFromHexadecimalStringUsingEncoding(encoding: NSStringEncoding) -> String? {
if let data = dataFromHexadecimalString() {
return String(data: data, encoding: encoding)
}
return nil
}
/// Create hexadecimal string representation of String object.
///
/// - parameter encoding: The NSStringCoding that indicates how the string should be converted to NSData before performing the hexadecimal conversion.
///
/// - returns: String representation of this String object.
func hexadecimalStringUsingEncoding(encoding: NSStringEncoding) -> String? {
let data = dataUsingEncoding(NSUTF8StringEncoding)
return data?.hexadecimalString()
}
}
You could then use the above like so:
let hexString = "68656c6c 6f2c2077 6f726c64"
print(hexString.stringFromHexadecimalStringUsingEncoding(NSUTF8StringEncoding))
Or,
let originalString = "hello, world"
print(originalString.hexadecimalStringUsingEncoding(NSUTF8StringEncoding)) | __label__pos | 0.99585 |
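For comparison, the same round trip is only a couple of lines in Python, where bytes.fromhex skips whitespace between byte pairs much like the regex-based Swift routine above tolerates it:

```python
hex_string = "68656c6c 6f2c2077 6f726c64"

data = bytes.fromhex(hex_string)   # hex -> bytes (whitespace is ignored)
print(data.decode("utf-8"))        # hello, world
print(data.hex())                  # 68656c6c6f2c20776f726c64
```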
glXCreateContext.3G - Man Page
create a new GLX rendering context
C Specification
GLXContext glXCreateContext( Display *dpy,
XVisualInfo *vis,
GLXContext shareList,
Bool direct )
Parameters
dpy
Specifies the connection to the X server.
vis
Specifies the visual that defines the frame buffer resources available to the rendering context. It is a pointer to an XVisualInfo structure, not a visual ID or a pointer to a Visual.
shareList
Specifies the context with which to share display lists. NULL indicates that no sharing is to take place.
direct
Specifies whether rendering is to be done with a direct connection to the graphics system if possible (True) or through the X server (False).
Description
glXCreateContext creates a GLX rendering context and returns its handle. This context can be used to render into both windows and GLX pixmaps. If glXCreateContext fails to create a rendering context, NULL is returned.
If direct is True, then a direct rendering context is created if the implementation supports direct rendering, if the connection is to an X server that is local, and if a direct rendering context is available. (An implementation may return an indirect context when direct is True). If direct is False, then a rendering context that renders through the X server is always created. Direct rendering provides a performance advantage in some implementations. However, direct rendering contexts cannot be shared outside a single process, and they may be unable to render to GLX pixmaps.
If shareList is not NULL, then all display-list indexes and definitions are shared by context shareList and by the newly created context. An arbitrary number of contexts can share a single display-list space. However, all rendering contexts that share a single display-list space must themselves exist in the same address space. Two rendering contexts share an address space if both are nondirect using the same server, or if both are direct and owned by a single process. Note that in the nondirect case, it is not necessary for the calling threads to share an address space, only for their related rendering contexts to share an address space.
If the GL version is 1.1 or greater, then all texture objects except object 0, are shared by any contexts that share display lists.
Notes
XVisualInfo is defined in Xutil.h. It is a structure that includes visual, visualID, screen, and depth elements.
A process is a single execution environment, implemented in a single address space, consisting of one or more threads.
A thread is one of a set of subprocesses that share a single address space, but maintain separate program counters, stack spaces, and other related global data. A thread that is the only member of its subprocess group is equivalent to a process.
It may not be possible to render to a GLX pixmap with a direct rendering context.
Errors
NULL is returned if execution fails on the client side.
BadMatch is generated if the context to be created would not share the address space or the screen of the context specified by shareList.
BadValue is generated if vis is not a valid visual (for example, if a particular GLX implementation does not support it).
GLXBadContext is generated if shareList is not a GLX context and is not NULL.
BadAlloc is generated if the server does not have enough resources to allocate the new context.
See Also
glXDestroyContext, glXGetConfig, glXIsDirect, glXMakeCurrent
Referenced By
glXChooseVisual.3G(3), glXCopyContext.3G(3), glXCreateGLXPixmap.3G(3), glXDestroyContext.3G(3), glXFreeContextEXT.3G(3), glXGetConfig.3G(3), glXGetContextIDEXT.3G(3), glXGetCurrentContext.3G(3), glXImportContextEXT.3G(3), glXIntro.3G(3), glXIsDirect.3G(3), glXMakeCurrent.3G(3), glXQueryContextInfoEXT.3G(3). | __label__pos | 0.856593 |
Alternate compilation for .BSP
Discussion in 'Mapping Questions & Discussion' started by DrSquishy, Oct 1, 2017.
1. DrSquishy
Is there any way to compile a .bsp file capable of being run by Team Fortress 2 without having to wait the many hours a final map can take? Just want to know because of a massive project I have in mind
2. Fragancia
I'm not sure I understand what you are asking here; the compile time depends on how optimized your map is and whether it's an expert compile or a normal one.
If you just want a fast compile to test out something that doesn't need visibility and lighting, just set VVIS and VRAD to fast.
3. DrSquishy
This is something...Different. It involves more than just a regular map, and for my project to work, it would require something several times larger than an average map.
4. Crowbar
How are you expecting a computer to be able to perform more calculations in the same amount of time?
5. DrSquishy
Not sure if I phrased it the way I meant it.
6. Crowbar
If you have a much bigger map, you can't just do some magic to still compile it in the time a normal map would compile in (aside from any factors of complexity).
7. DrSquishy
How far can hints, func_details, areaportals, and those kinds of things go?
8. Crowbar
These are mostly meant for optimising the in-game experience. If compile time is all you care about, you can go pretty far by eliminating vis nigh entirely, aka not having any world brushes, but you can't really trick VRAD.
Now, if you want a sane in-game experience... if your map is an order of magnitude bigger than a regular one, there's not much help for you; see the point I've previously made.
9. Lampenpam
If your compile takes an hour, you are doing it wrong: optimize your map. Func_detail is there to make the visleaf structure easier to read and to reduce the number of leafs that have to be compiled. Area portals don't speed up compile time.
If VRAD is supposed to run faster, the best you can do is nodraw everything that shouldn't be drawn. Also nodraw faces of entities like func_door that are always covered, because even when they are always covered, they get lightmap information anyway.
Increasing the lightmap scale on certain faces speeds things up too, because VRAD is compiling the lightmaps after all. Also mind that faces that are only partly covered by func_detail/entities aren't culled and receive lightmap data on the entire face.
Noclip around your map and check that you don't have microgaps or large faces inside your world that can't be seen by players and therefore shouldn't have a texture with lightmaps.
| __label__pos | 0.772915 |
Setting Up a Python Development Environment
First, let's check whether Python already exists on the system, and install some development tool packages.
Preparation before installing
Check the Python version currently on the system; you can see that this lab server already has Python 2.6.6 installed:
python --version
Check the CentOS version; you can see that this server runs CentOS release 6.8:
cat /etc/redhat-release
To avoid errors during the later installation, install the development tool packages first.
Configure, then compile and install:
./configure
make && make install
Configuring Python
Update the system's default Python version. First rename the old system default Python:
mv /usr/bin/python /usr/bin/python.old
Then delete the system's default python-config symlink:
rm -f /usr/bin/python-config
Finally, create symlinks for the new Python version:
ln -s /usr/local/bin/python /usr/bin/python
ln -s /usr/local/bin/python-config /usr/bin/python-config
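Once the symlinks are in place, a quick way to confirm which interpreter and version are actually being picked up is to ask Python itself:

```python
import sys

print(sys.executable)        # path of the interpreter actually running
print(sys.version_info[:3])  # version tuple of that interpreter
```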
Setting Up a Python Development Environment
"If a workman wishes to do his job well, he must first sharpen his tools." The first thing to do when learning Python programming from scratch is to set up a good development environment; in today's article we will look together at how to set up a Python development environment on different platforms.
Note: add Python to the environment variables, so that later on you can also run Python from the Windows command prompt.
Choose the matching tools, and then you can pick your favorite IDE and you're done.
They are different, so be sure to keep them apart carefully during installation and use.
Virtual environments
To build Python applications more cleanly, you can also use a feature Python provides: virtual environments.
To create a virtual environment, decide on the directory where you want to put it, and run the venv module as a script with the directory path.
Create a virtual environment:
python3 -m venv tutorial-env
If the tutorial-env directory does not exist, it will be created for you.
Deactivate the virtual environment:
deactivate
Closing words
So far we have set up the environment our Python application needs. Why not start your Python coding journey right now?
Configuring a Python Development Environment
Install the development package from https://www.python.org/
Note: x86 is for Intel processors and X86-64 for AMD processors (choose the matching package to avoid compatibility problems).
On a Windows 10 machine it is recommended to install the executable installer (installing pip on Windows 10 runs into many dependency problems; the executable installer sets up pip for you along the way).
Run cmd to check whether the environment variable took effect:
C:\Users\noneplus>python
IDEA plugin
JetBrains has PyCharm for Python development, but having gotten used to IDEA back when developing Java, I was too lazy to install it and just added a plugin to IDEA.
Plugin download address: https://plugins.jetbrains.com/plugin/631-python/versions
Download the matching version, then install it from the plugin section of the settings.
Create a new project, in which the Python plugin is already active.
Verifying the build-and-run environment
Configure the SDK before creating the project, then run a hello, world.
19510
Python Integrated Development Environments
The Python language is easy to learn and powerful; thanks to its rich third-party libraries we can stand on the shoulders of giants, solve problems with great efficiency, and apply it widely to web development, systems operations, web crawlers, science and engineering, machine learning, and data analysis. There are many Python development tools: besides Python's bundled IDLE there are Vim, PyCharm, Sublime Text, Atom, VSCode, Eclipse, and so on. Which development environment should a beginner choose? PyCharm: if you cannot make up your mind, the PyCharm Community edition is recommended; it is completely open source and free and integrates the tools and features needed for Python development ("once you have it, you want nothing more"), while the paid edition better supports enterprise-level development. One user's verdict: "You can access the command line, connect to databases, create virtual environments, and manage version control systems from one place, saving the time otherwise lost to constantly switching between windows."
Setting up a Python development environment
Setting up a Python development environment is quite simple, and I had not planned to write about it, but on reflection I am writing it anyway: partly to keep the Python content on my blog complete, and partly because some people really may not know how, since I myself switched to Python only after using many other languages. For Linux users the system already ships with Python, so there is nothing to install at all; if you need some other Python version, download one yourself, but be careful not to uninstall the system's own Python, because many programs in Linux depend on it. Download the installer, and after installing, add the Python install path to the PATH environment variable. With the runtime in place, the next thing is an editor. A practised Python programmer can write in just about anything, but since you have read this far you are probably a beginner, so you can use Komodo Edit, from http://www.activestate.com. There really is not much to say about setting up a Python environment. Oh, I almost forgot a classic: once the environment is ready, the first thing to do is open a terminal (cmd), enter python, and then print 'hello world!'.
Building a Python Development Environment
Imagine that users buying a product you developed with MATLAB also have to buy MATLAB. Python is different: it is open source, so buying a product developed in Python incurs no such wasted expense. Open source Python has its own annoyances, though, and among the most troublesome are managing the various extension libraries and coexisting Python versions, a problem most prominent on Windows. Anaconda is a Python distribution for scientific computing; you can simply think of it as a packaged bundle preloaded with conda, a particular Python version, a large number of packages, scientific computing tools, and so on. Anaconda supports Linux, Mac, and Windows, and provides package management and environment management, conveniently solving the problems of running multiple Python versions side by side, switching between them, and installing third-party packages. It also ships an integrated development environment (IDE), Spyder.
How to Set Up a Python Development Environment
How do you set up a Python development environment? Thank you for the invitation. There are three main platforms for building a Python environment: Windows, Mac, and Linux; of course, some setups even run directly on a phone. To test whether the Python installation succeeded, open a command line and enter the python command. At this point the development environment has been set up; I hope this is of some help. How do you install Python support in Visual Studio Code? There are many Python development environments. Originally this Python environment was built on VS2013, but VS2013 takes up too much memory every time it starts (it is too heavyweight). VS Code, by contrast, is light and slick, so it is time to try it for Python development. Click through the Visual Studio Code 1.9 installation tutorial below, then build the Python environment directly: 1. open VS Code, press F1 or Ctrl Shift P to open the command palette, and type ext; 2. type Python and choose the first result, which is the most widely used extension and supports features such as code auto-completion.
Configuring a Python Development Environment in VSCode
Whether you write CSS, PHP, or C/C++, VSCode is a fine choice, and it is just as good for writing Python. So today we cover how to configure a Python development environment in VSCode. Here I recommend you choose Python 3 rather than Python 2.7. Software installation: first, install Python on your machine, and be absolutely sure to tick the option that adds it to the environment variables. Then we need to check whether Python installed successfully. The check goes like this: open cmd, type python, and press Enter; type import this and enjoy the Zen of Python. Configuring VSCode: to install the Python extension, open VSCode, press the shortcut Ctrl+Shift+X to enter the extension management page, type python into the search bar, select the extension, and click install. With that done, the entire VSCode Python development environment is fully configured. Code formatting tool yapf: in VSCode, press the shortcut Alt+Shift+F to auto-format your code.
Python: A Flexible Development Environment
The development environment is very flexible, thanks to the ability to create virtual environments. Could you not just install the Python support packages globally, pip-install a batch of dependencies, and develop Python programs that way? Obviously you could, and they would run perfectly well. Creation: when creating a virtual environment, the configuration details differ between Python 2 and Python 3, but the idea is the same. Python 2 needs the third-party virtualenv package to create virtual environments. In your project directory, run (Windows cmd): python -m venv venv. Activation: once the virtual environment is created, you want the Python version, dependency packages, and so on that the project relies on to be managed independently while you develop and maintain it. Before starting a debug session or installing third-party dependency packages, activating the environment only requires executing the script that was generated automatically when the environment was created.
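A program can also check at runtime whether it is running inside such a virtual environment; a small stdlib-only sketch (the helper name in_virtualenv is illustrative; on Python 3, sys.prefix differs from sys.base_prefix inside a venv, while the older virtualenv tool injects sys.real_prefix):

```python
import sys

def in_virtualenv():
    """Return True when the interpreter runs inside a venv/virtualenv.

    Python 3 venvs set sys.prefix != sys.base_prefix; the older
    virtualenv tool instead adds a sys.real_prefix attribute.
    """
    base = getattr(sys, "base_prefix", sys.prefix)
    return sys.prefix != base or hasattr(sys, "real_prefix")

print(in_virtualenv())
```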
Integrating a Python Development Environment into HBuilder
HBuilder is a rather good tool developed in China; here is how to integrate Python into it (on Windows): download Python from https://www.python.org/getit/ , install it, install a plugin inside HBuilder, configure the run environment, create a Python project, and start with a hello world. Download Python, then install python-2.7.11.msi: simply run it and click through the installer. 1. After installation, configure Python in your environment variables, then verify that the environment is OK from cmd; so far, very cool. Install the PyDev plugin in HBuilder: HBuilder > Tools > Plugin installation, find pydev, select it, and click install (or click Apply). Start creating a Python project; if you do not see "pydev project" here, you can find PyDev under "Other". Write the hello world code, and that brings things to a pleasant close.
Configuring a Python environment for PyCharm on macOS
Add the environment variables for Python 3 to your shell profile; the virtual environment and database settings are optional (simply leave them out, since including them without the tools installed will cause errors):

# Setting PATH for Python 3.7 (Python 3 environment variables)
# The original version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/3.7/bin:${PATH}"
export PATH
# The virtualenv and database settings below are optional (omit them if not installed, or they will cause errors)
# Setting virtualenv PATH for Python 3.7 (virtual environment settings)
export WORKON_HOME='~/workspace'
export VIRTUALENVWRAPPER_SCRIPT
/[escript]/trunk/escript/src/Data.h
Contents of /trunk/escript/src/Data.h
Revision 2783
Thu Nov 26 05:07:33 2009 UTC (10 years, 2 months ago) by lgao
File MIME type: text/plain
File size: 92080 byte(s)
Process C_TensorBinaryOperation at the level of a whole sample rather than
each datapoint when Expanded data operates with Constant data. This
could improve the efficiency of the non-lazy version of escript.
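The idea behind this change, applying the binary operation over a whole sample's values at once when one operand is Constant rather than dispatching per datapoint, can be sketched in Python; the names op_per_datapoint, op_per_sample, data, and add are illustrative only, not escript API:

```python
def op_per_datapoint(expanded, constant, op):
    # Baseline: dispatch the binary operation once per datapoint, so the
    # constant operand is re-fetched for every single value.
    return [[op(value, constant) for value in sample] for sample in expanded]

def op_per_sample(expanded, constant, op):
    # Sample-level variant: hand each whole sample to one tight inner
    # loop, hoisting the per-point dispatch out to the sample level.
    result = []
    for sample in expanded:
        result.append([op(value, constant) for value in sample])
    return result

data = [[1.0, 2.0], [3.0, 4.0]]   # two samples, two datapoints each
add = lambda a, b: a + b
print(op_per_sample(data, 10.0, add))   # [[11.0, 12.0], [13.0, 14.0]]
```

Both variants compute the same result; the win in the C++ code is purely in how often the dispatch and operand lookup happen.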
1
2 /*******************************************************
3 *
4 * Copyright (c) 2003-2009 by University of Queensland
5 * Earth Systems Science Computational Center (ESSCC)
6 * http://www.uq.edu.au/esscc
7 *
8 * Primary Business: Queensland, Australia
9 * Licensed under the Open Software License version 3.0
10 * http://www.opensource.org/licenses/osl-3.0.php
11 *
12 *******************************************************/
13
14
15 /** \file Data.h */
16
17 #ifndef DATA_H
18 #define DATA_H
19 #include "system_dep.h"
20
21 #include "DataTypes.h"
22 #include "DataAbstract.h"
23 #include "DataAlgorithm.h"
24 #include "FunctionSpace.h"
25 #include "BinaryOp.h"
26 #include "UnaryOp.h"
27 #include "DataException.h"
28
29
30 extern "C" {
31 #include "DataC.h"
32 //#include <omp.h>
33 }
34
35 #ifdef _OPENMP
36 #include <omp.h>
37 #endif
38
39 #include "esysmpi.h"
40 #include <string>
41 #include <algorithm>
42 #include <sstream>
43
44 #include <boost/shared_ptr.hpp>
45 #include <boost/python/object.hpp>
46 #include <boost/python/tuple.hpp>
47
48 namespace escript {
49
50 //
51 // Forward declaration for various implementations of Data.
52 class DataConstant;
53 class DataTagged;
54 class DataExpanded;
55 class DataLazy;
56
57 /**
58 \brief
59 Data represents a collection of datapoints.
60
61 Description:
62 Internally, the datapoints are actually stored by a DataAbstract object.
63 The specific instance of DataAbstract used may vary over the lifetime
64 of the Data object.
65 Some methods on this class return references (eg getShape()).
66 These references should not be used after an operation which changes the underlying DataAbstract object.
67 Doing so will lead to invalid memory access.
68 This should not affect any methods exposed via boost::python.
69 */
70 class Data {
71
72 public:
73
74 // These typedefs allow function names to be cast to pointers
75 // to functions of the appropriate type when calling unaryOp etc.
76 typedef double (*UnaryDFunPtr)(double);
77 typedef double (*BinaryDFunPtr)(double,double);
78
79
80 /**
81 Constructors.
82 */
83
84 /**
85 \brief
86 Default constructor.
87 Creates a DataEmpty object.
88 */
89 ESCRIPT_DLL_API
90 Data();
91
92 /**
93 \brief
94 Copy constructor.
95 WARNING: Only performs a shallow copy.
96 */
97 ESCRIPT_DLL_API
98 Data(const Data& inData);
99
100 /**
101 \brief
102 Constructor from another Data object. If "what" is different from the
103 function space of inData, an attempt is made to interpolate inData to what;
104 otherwise a shallow copy of inData is returned.
105 */
106 ESCRIPT_DLL_API
107 Data(const Data& inData,
108 const FunctionSpace& what);
109
110 /**
111 \brief Copy Data from an existing vector
112 */
113
114 ESCRIPT_DLL_API
115 Data(const DataTypes::ValueType& value,
116 const DataTypes::ShapeType& shape,
117 const FunctionSpace& what=FunctionSpace(),
118 bool expanded=false);
119
120 /**
121 \brief
122 Constructor which creates a Data with points having the specified shape.
123
124 \param value - Input - Single value applied to all Data.
125 \param dataPointShape - Input - The shape of each data point.
126 \param what - Input - A description of what this data represents.
127 \param expanded - Input - Flag, if true fill the entire container with
128 the given value. Otherwise a more efficient storage
129 mechanism will be used.
130 */
131 ESCRIPT_DLL_API
132 Data(double value,
133 const DataTypes::ShapeType& dataPointShape=DataTypes::ShapeType(),
134 const FunctionSpace& what=FunctionSpace(),
135 bool expanded=false);
136
137 /**
138 \brief
139 Constructor which performs a deep copy of a region from another Data object.
140
141 \param inData - Input - Input Data object.
142 \param region - Input - Region to copy.
143 */
144 ESCRIPT_DLL_API
145 Data(const Data& inData,
146 const DataTypes::RegionType& region);
147
148 /**
149 \brief
150 Constructor which copies data from any object that can be treated like a python array/sequence.
151
152 \param value - Input - Input data.
153 \param what - Input - A description of what this data represents.
154 \param expanded - Input - Flag, if true fill the entire container with
155 the value. Otherwise a more efficient storage
156 mechanism will be used.
157 */
158 ESCRIPT_DLL_API
159 Data(const boost::python::object& value,
160 const FunctionSpace& what=FunctionSpace(),
161 bool expanded=false);
162
163 /**
164 \brief
165 Constructor which creates a DataConstant.
166 Copies data from any object that can be treated like a python array/sequence.
167 All other parameters are copied from other.
168
169 \param value - Input - Input data.
170 \param other - Input - contains all other parameters.
171 */
172 ESCRIPT_DLL_API
173 Data(const boost::python::object& value,
174 const Data& other);
175
176 /**
177 \brief
178 Constructor which creates a DataConstant of "shape" with constant value.
179 */
180 ESCRIPT_DLL_API
181 Data(double value,
182 const boost::python::tuple& shape=boost::python::make_tuple(),
183 const FunctionSpace& what=FunctionSpace(),
184 bool expanded=false);
185
186
187
188 /**
189 \brief Create a Data using an existing DataAbstract. Warning: The new object assumes ownership of the pointer!
190 Once you have passed the pointer, do not delete it.
191 */
192 ESCRIPT_DLL_API
193 explicit Data(DataAbstract* underlyingdata);
194
195 /**
196 \brief Create a Data based on the supplied DataAbstract
197 */
198 ESCRIPT_DLL_API
199 explicit Data(DataAbstract_ptr underlyingdata);
200
201 /**
202 \brief
203 Destructor
204 */
205 ESCRIPT_DLL_API
206 ~Data();
207
208 /**
209 \brief Make this object a deep copy of "other".
210 */
211 ESCRIPT_DLL_API
212 void
213 copy(const Data& other);
214
215 /**
216 \brief Return a pointer to a deep copy of this object.
217 */
218 ESCRIPT_DLL_API
219 Data
220 copySelf();
221
222
223 /**
224 \brief produce a delayed evaluation version of this Data.
225 */
226 ESCRIPT_DLL_API
227 Data
228 delay();
229
230 /**
231 \brief convert the current data into lazy data.
232 */
233 ESCRIPT_DLL_API
234 void
235 delaySelf();
236
237
238 /**
239 Member access methods.
240 */
241
242 /**
243 \brief
244 switches on update protection
245
246 */
247 ESCRIPT_DLL_API
248 void
249 setProtection();
250
251 /**
252 \brief
253 Returns true if the data object is protected against update
254
255 */
256 ESCRIPT_DLL_API
257 bool
258 isProtected() const;
259
260
261 /**
262 \brief
263 Return the value of a data point as a python tuple.
264 */
265 ESCRIPT_DLL_API
266 const boost::python::object
267 getValueOfDataPointAsTuple(int dataPointNo);
268
269 /**
270 \brief
271 sets the values of a data-point from a python object on this process
272 */
273 ESCRIPT_DLL_API
274 void
275 setValueOfDataPointToPyObject(int dataPointNo, const boost::python::object& py_object);
276
277 /**
278 \brief
279 sets the values of a data-point from a array-like object on this process
280 */
281 ESCRIPT_DLL_API
282 void
283 setValueOfDataPointToArray(int dataPointNo, const boost::python::object&);
284
285 /**
286 \brief
287 sets the values of a data-point on this process
288 */
289 ESCRIPT_DLL_API
290 void
291 setValueOfDataPoint(int dataPointNo, const double);
292
293 /**
294 \brief Return a data point across all processors as a python tuple.
295 */
296 ESCRIPT_DLL_API
297 const boost::python::object
298 getValueOfGlobalDataPointAsTuple(int procNo, int dataPointNo);
299
300 /**
301 \brief
302 Return the tag number associated with the given data-point.
303
304 */
305 ESCRIPT_DLL_API
306 int
307 getTagNumber(int dpno);
308
309 /**
310 \brief
311 Return the C wrapper for the Data object.
312 */
313 ESCRIPT_DLL_API
314 escriptDataC
315 getDataC();
316
317
318
319 /**
320 \brief
321 Return the C wrapper for the Data object - const version.
322 */
323 ESCRIPT_DLL_API
324 escriptDataC
325 getDataC() const;
326
327
328 /**
329 \brief
330 Write the data as a string. For large amounts of data, a summary is printed.
331 */
332 ESCRIPT_DLL_API
333 std::string
334 toString() const;
335
336 /**
337 \brief
338 Whatever the current Data type make this into a DataExpanded.
339 */
340 ESCRIPT_DLL_API
341 void
342 expand();
343
344 /**
345 \brief
346 If possible convert this Data to DataTagged. This will only allow
347 Constant data to be converted to tagged. An attempt to convert
348 Expanded data to tagged will throw an exception.
349 */
350 ESCRIPT_DLL_API
351 void
352 tag();
353
354 /**
355 \brief If this data is lazy, then convert it to ready data.
356 What type of ready data depends on the expression. For example, Constant+Tagged==Tagged.
357 */
358 ESCRIPT_DLL_API
359 void
360 resolve();
361
362
363 /**
364 \brief Ensures data is ready for write access.
365 This means that the data will be resolved if lazy and will be copied if shared with another Data object.
366 \warning This method should only be called in single threaded sections of code. (It modifies m_data).
367 Do not create any Data objects from this one between calling requireWrite and getSampleDataRW.
368 Doing so might introduce additional sharing.
369 */
370 ESCRIPT_DLL_API
371 void
372 requireWrite();
373
374 /**
375 \brief
376 Return true if this Data is expanded.
377 \note To determine whether a sample will contain separate values for each datapoint, use actsExpanded instead.
378 */
379 ESCRIPT_DLL_API
380 bool
381 isExpanded() const;
382
383 /**
384 \brief
385 Return true if this Data is expanded or resolves to expanded.
386 That is, if it has a separate value for each datapoint in the sample.
387 */
388 ESCRIPT_DLL_API
389 bool
390 actsExpanded() const;
391
392
393 /**
394 \brief
395 Return true if this Data is tagged.
396 */
397 ESCRIPT_DLL_API
398 bool
399 isTagged() const;
400
401 /**
402 \brief
403 Return true if this Data is constant.
404 */
405 ESCRIPT_DLL_API
406 bool
407 isConstant() const;
408
409 /**
410 \brief Return true if this Data is lazy.
411 */
412 ESCRIPT_DLL_API
413 bool
414 isLazy() const;
415
416 /**
417 \brief Return true if this data is ready.
418 */
419 ESCRIPT_DLL_API
420 bool
421 isReady() const;
422
423 /**
424 \brief
425 Return true if this Data holds an instance of DataEmpty. This is _not_ the same as asking if the object
426 contains datapoints.
427 */
428 ESCRIPT_DLL_API
429 bool
430 isEmpty() const;
431
432 /**
433 \brief
434 Return the function space.
435 */
436 ESCRIPT_DLL_API
437 inline
438 const FunctionSpace&
439 getFunctionSpace() const
440 {
441 return m_data->getFunctionSpace();
442 }
443
444 /**
445 \brief
446 Return a copy of the function space.
447 */
448 ESCRIPT_DLL_API
449 const FunctionSpace
450 getCopyOfFunctionSpace() const;
451
452 /**
453 \brief
454 Return the domain.
455 */
456 ESCRIPT_DLL_API
457 inline
458 // const AbstractDomain&
459 const_Domain_ptr
460 getDomain() const
461 {
462 return getFunctionSpace().getDomain();
463 }
464
465
466 /**
467 \brief
468 Return the domain.
469 TODO: For internal use only. This should be removed.
470 */
471 ESCRIPT_DLL_API
472 inline
473 // const AbstractDomain&
474 Domain_ptr
475 getDomainPython() const
476 {
477 return getFunctionSpace().getDomainPython();
478 }
479
480 /**
481 \brief
482 Return a copy of the domain.
483 */
484 ESCRIPT_DLL_API
485 const AbstractDomain
486 getCopyOfDomain() const;
487
488 /**
489 \brief
490 Return the rank of the point data.
491 */
492 ESCRIPT_DLL_API
493 inline
494 unsigned int
495 getDataPointRank() const
496 {
497 return m_data->getRank();
498 }
499
500 /**
501 \brief
502 Return the number of data points
503 */
504 ESCRIPT_DLL_API
505 inline
506 int
507 getNumDataPoints() const
508 {
509 return getNumSamples() * getNumDataPointsPerSample();
510 }
511 /**
512 \brief
513 Return the number of samples.
514 */
515 ESCRIPT_DLL_API
516 inline
517 int
518 getNumSamples() const
519 {
520 return m_data->getNumSamples();
521 }
522
523 /**
524 \brief
525 Return the number of data points per sample.
526 */
527 ESCRIPT_DLL_API
528 inline
529 int
530 getNumDataPointsPerSample() const
531 {
532 return m_data->getNumDPPSample();
533 }
534
535
536 /**
537 \brief
538 Return the number of values in the shape for this object.
539 */
540 ESCRIPT_DLL_API
541 int
542 getNoValues() const
543 {
544 return m_data->getNoValues();
545 }
546
547
548 /**
549 \brief
550 dumps the object into a netCDF file
551 */
552 ESCRIPT_DLL_API
553 void
554 dump(const std::string fileName) const;
555
556 /**
557 \brief returns the values of the object as a list of tuples (one for each datapoint).
558
559 \param scalarastuple If true, scalar data will produce single valued tuples [(1,) (2,) ...]
560 If false, the result is a list of scalars [1, 2, ...]
561 */
562 ESCRIPT_DLL_API
563 const boost::python::object
564 toListOfTuples(bool scalarastuple=true);
565
566
567 /**
568 \brief
569 Return the sample data for the given sample no. This is not the
570 preferred interface but is provided for use by C code.
571 The buffer parameter is only required for LazyData.
572 \param sampleNo - Input - the given sample no.
573 \return pointer to the sample data.
574 */
575 ESCRIPT_DLL_API
576 inline
577 const DataAbstract::ValueType::value_type*
578 getSampleDataRO(DataAbstract::ValueType::size_type sampleNo);
579
580
581 /**
582 \brief
583 Return the sample data for the given sample no. This is not the
584 preferred interface but is provided for use by C code.
585 \param sampleNo - Input - the given sample no.
586 \return pointer to the sample data.
587 */
588 ESCRIPT_DLL_API
589 inline
590 DataAbstract::ValueType::value_type*
591 getSampleDataRW(DataAbstract::ValueType::size_type sampleNo);
592
593
594 /**
595 \brief
596 Return the sample data for the given tag. If an attempt is made to
597 access data that isn't tagged an exception will be thrown.
598 \param tag - Input - the tag key.
599 */
600 ESCRIPT_DLL_API
601 inline
602 DataAbstract::ValueType::value_type*
603 getSampleDataByTag(int tag)
604 {
605 return m_data->getSampleDataByTag(tag);
606 }
607
608 /**
609 \brief
610 Return a reference into the DataVector which points to the specified data point.
611 \param sampleNo - Input -
612 \param dataPointNo - Input -
613 */
614 ESCRIPT_DLL_API
615 DataTypes::ValueType::const_reference
616 getDataPointRO(int sampleNo, int dataPointNo);
617
618 /**
619 \brief
620 Return a reference into the DataVector which points to the specified data point.
621 \param sampleNo - Input -
622 \param dataPointNo - Input -
623 */
624 ESCRIPT_DLL_API
625 DataTypes::ValueType::reference
626 getDataPointRW(int sampleNo, int dataPointNo);
627
628
629
630 /**
631 \brief
632 Return the offset for the given sample and point within the sample
633 */
634 ESCRIPT_DLL_API
635 inline
636 DataTypes::ValueType::size_type
637 getDataOffset(int sampleNo,
638 int dataPointNo)
639 {
640 return m_data->getPointOffset(sampleNo,dataPointNo);
641 }
642
643 /**
644 \brief
645 Return a reference to the data point shape.
646 */
647 ESCRIPT_DLL_API
648 inline
649 const DataTypes::ShapeType&
650 getDataPointShape() const
651 {
652 return m_data->getShape();
653 }
654
655 /**
656 \brief
657 Return the data point shape as a tuple of integers.
658 */
659 ESCRIPT_DLL_API
660 const boost::python::tuple
661 getShapeTuple() const;
662
663 /**
664 \brief
665 Return the size of the data point. It is the product of the
666 data point shape dimensions.
667 */
668 ESCRIPT_DLL_API
669 int
670 getDataPointSize() const;
671
672 /**
673 \brief
674 Return the number of doubles stored for this Data.
675 */
676 ESCRIPT_DLL_API
677 DataTypes::ValueType::size_type
678 getLength() const;
679
680 /**
681 \brief Return true if this object contains no samples.
682 This is not the same as isEmpty()
683 */
684 ESCRIPT_DLL_API
685 bool
686 hasNoSamples() const
687 {
688 return getLength()==0;
689 }
690
691 /**
692 \brief
693 Assign the given value to the tag associated with name. Implicitly converts this
694 object to type DataTagged. Throws an exception if this object
695 cannot be converted to a DataTagged object or name cannot be mapped onto a tag key.
696 \param name - Input - name of tag.
697 \param value - Input - Value to associate with given key.
698 */
699 ESCRIPT_DLL_API
700 void
701 setTaggedValueByName(std::string name,
702 const boost::python::object& value);
703
704 /**
705 \brief
706 Assign the given value to the tag. Implicitly converts this
707 object to type DataTagged if it is constant.
708
709 \param tagKey - Input - Integer key.
710 \param value - Input - Value to associate with given key.
711 ==>*
712 */
713 ESCRIPT_DLL_API
714 void
715 setTaggedValue(int tagKey,
716 const boost::python::object& value);
717
718 /**
719 \brief
720 Assign the given value to the tag. Implicitly converts this
721 object to type DataTagged if it is constant.
722
723 \param tagKey - Input - Integer key.
724 \param pointshape - Input - The shape of the value parameter
725 \param value - Input - Value to associate with given key.
726 \param dataOffset - Input - Offset of the beginning of the point within the value parameter
727 */
728 ESCRIPT_DLL_API
729 void
730 setTaggedValueFromCPP(int tagKey,
731 const DataTypes::ShapeType& pointshape,
732 const DataTypes::ValueType& value,
733 int dataOffset=0);
734
735
736
737 /**
738 \brief
739 Copy other Data object into this Data object where mask is positive.
740 */
741 ESCRIPT_DLL_API
742 void
743 copyWithMask(const Data& other,
744 const Data& mask);
745
746 /**
747 Data object operation methods and operators.
748 */
749
750 /**
751 \brief
752 set all values to zero
753 *
754 */
755 ESCRIPT_DLL_API
756 void
757 setToZero();
758
759 /**
760 \brief
761 Interpolates this onto the given functionspace and returns
762 the result as a Data object.
763 *
764 */
765 ESCRIPT_DLL_API
766 Data
767 interpolate(const FunctionSpace& functionspace) const;
768
769
770 ESCRIPT_DLL_API
771 Data
772 interpolateFromTable2D(const WrappedArray& table, double Amin, double Astep,
773 double undef, Data& B, double Bmin, double Bstep,bool check_boundaries);
774
775 ESCRIPT_DLL_API
776 Data
777 interpolateFromTable1D(const WrappedArray& table, double Amin, double Astep,
778 double undef,bool check_boundaries);
779
780
781
782
783 ESCRIPT_DLL_API
784 Data
785 interpolateFromTable2DP(boost::python::object table, double Amin, double Astep,
786 Data& B, double Bmin, double Bstep, double undef,bool check_boundaries);
787
788 ESCRIPT_DLL_API
789 Data
790 interpolateFromTable1DP(boost::python::object table, double Amin, double Astep,
791 double undef,bool check_boundaries);
792
793 /**
794 \brief
795 Calculates the gradient of the data at the data points of functionspace.
796 If functionspace is not present the function space of Function(getDomain()) is used.
797 *
798 */
799 ESCRIPT_DLL_API
800 Data
801 gradOn(const FunctionSpace& functionspace) const;
802
803 ESCRIPT_DLL_API
804 Data
805 grad() const;
806
807 /**
808 \brief
809 Calculate the integral over the function space domain as a python tuple.
810 */
811 ESCRIPT_DLL_API
812 boost::python::object
813 integrateToTuple_const() const;
814
815
816 /**
817 \brief
818 Calculate the integral over the function space domain as a python tuple.
819 */
820 ESCRIPT_DLL_API
821 boost::python::object
822 integrateToTuple();
823
824
825
826 /**
827 \brief
828 Returns 1./ Data object
829 *
830 */
831 ESCRIPT_DLL_API
832 Data
833 oneOver() const;
834 /**
835 \brief
836 Return a Data with a 1 for +ive values and a 0 for 0 or -ive values.
837 *
838 */
839 ESCRIPT_DLL_API
840 Data
841 wherePositive() const;
842
843 /**
844 \brief
845 Return a Data with a 1 for -ive values and a 0 for +ive or 0 values.
846 *
847 */
848 ESCRIPT_DLL_API
849 Data
850 whereNegative() const;
851
852 /**
853 \brief
854 Return a Data with a 1 for +ive or 0 values and a 0 for -ive values.
855 *
856 */
857 ESCRIPT_DLL_API
858 Data
859 whereNonNegative() const;
860
861 /**
862 \brief
863 Return a Data with a 1 for -ive or 0 values and a 0 for +ive values.
864 *
865 */
866 ESCRIPT_DLL_API
867 Data
868 whereNonPositive() const;
869
870 /**
871 \brief
872 Return a Data with a 1 for 0 values and a 0 for +ive or -ive values.
873 *
874 */
875 ESCRIPT_DLL_API
876 Data
877 whereZero(double tol=0.0) const;
878
879 /**
880 \brief
881 Return a Data with a 0 for 0 values and a 1 for +ive or -ive values.
882 *
883 */
884 ESCRIPT_DLL_API
885 Data
886 whereNonZero(double tol=0.0) const;
887
888 /**
889 \brief
890 Return the maximum absolute value of this Data object.
891
892 The method is not const because lazy data needs to be expanded before Lsup can be computed.
893 The _const form can be used when the Data object is const, however this will only work for
894 Data which is not Lazy.
895
896 For Data which contain no samples (or tagged Data for which no tags in use have a value)
897 zero is returned.
898 */
899 ESCRIPT_DLL_API
900 double
901 Lsup();
902
903 ESCRIPT_DLL_API
904 double
905 Lsup_const() const;
906
907
908 /**
909 \brief
910 Return the maximum value of this Data object.
911
912 The method is not const because lazy data needs to be expanded before sup can be computed.
913 The _const form can be used when the Data object is const, however this will only work for
914 Data which is not Lazy.
915
916 For Data which contain no samples (or tagged Data for which no tags in use have a value)
917 a large negative value is returned.
918 */
919 ESCRIPT_DLL_API
920 double
921 sup();
922
923 ESCRIPT_DLL_API
924 double
925 sup_const() const;
926
927
928 /**
929 \brief
930 Return the minimum value of this Data object.
931
932 The method is not const because lazy data needs to be expanded before inf can be computed.
933 The _const form can be used when the Data object is const, however this will only work for
934 Data which is not Lazy.
935
936 For Data which contain no samples (or tagged Data for which no tags in use have a value)
937 a large positive value is returned.
938 */
939 ESCRIPT_DLL_API
940 double
941 inf();
942
943 ESCRIPT_DLL_API
944 double
945 inf_const() const;
946
947
948
949 /**
950 \brief
951 Return the absolute value of each data point of this Data object.
952 *
953 */
954 ESCRIPT_DLL_API
955 Data
956 abs() const;
957
958 /**
959 \brief
960 Return the maximum value of each data point of this Data object.
961 *
962 */
963 ESCRIPT_DLL_API
964 Data
965 maxval() const;
966
967 /**
968 \brief
969 Return the minimum value of each data point of this Data object.
970 *
971 */
972 ESCRIPT_DLL_API
973 Data
974 minval() const;
975
976 /**
977 \brief
978 Return the (sample number, data-point number) of the data point with
979 the minimum component value in this Data object.
980 \note If you are working in python, please consider using Locator
981 instead of manually manipulating process and point IDs.
982 */
983 ESCRIPT_DLL_API
984 const boost::python::tuple
985 minGlobalDataPoint() const;
986
987 /**
988 \brief
989 Return the (sample number, data-point number) of the data point with
990 the maximum component value in this Data object.
991 \note If you are working in python, please consider using Locator
992 instead of manually manipulating process and point IDs.
993 */
994 ESCRIPT_DLL_API
995 const boost::python::tuple
996 maxGlobalDataPoint() const;
997
998
999
1000 /**
1001 \brief
1002 Return the sign of each data point of this Data object.
1003 -1 for negative values, zero for zero values, 1 for positive values.
1004 *
1005 */
1006 ESCRIPT_DLL_API
1007 Data
1008 sign() const;
1009
1010 /**
1011 \brief
1012 Return the symmetric part of a matrix which is half the matrix plus its transpose.
1013 *
1014 */
1015 ESCRIPT_DLL_API
1016 Data
1017 symmetric() const;
1018
1019 /**
1020 \brief
1021 Return the nonsymmetric part of a matrix which is half the matrix minus its transpose.
1022 *
1023 */
1024 ESCRIPT_DLL_API
1025 Data
1026 nonsymmetric() const;
1027
1028 /**
1029 \brief
1030 Return the trace of a matrix
1031 *
1032 */
1033 ESCRIPT_DLL_API
1034 Data
1035 trace(int axis_offset) const;
1036
1037 /**
1038 \brief
1039 Transpose each data point of this Data object around the given axis.
1040 *
1041 */
1042 ESCRIPT_DLL_API
1043 Data
1044 transpose(int axis_offset) const;
1045
1046 /**
1047 \brief
1048 Return the eigenvalues of the symmetric part at each data point of this Data object in increasing order.
1049 Currently this function is restricted to rank 2, square shape, and dimension 3.
1050 *
1051 */
1052 ESCRIPT_DLL_API
1053 Data
1054 eigenvalues() const;
1055
1056 /**
1057 \brief
1058 Return the eigenvalues and corresponding eigenvectors of the symmetric part at each data point of this Data object.
1059 The eigenvalues are ordered in increasing size where eigenvalues with relative difference less than
1060 tol are treated as equal. The eigenvectors are orthogonal, normalized and scaled such that the
1061 first non-zero entry is positive.
1062 Currently this function is restricted to rank 2, square shape, and dimension 3
1063 *
1064 */
1065 ESCRIPT_DLL_API
1066 const boost::python::tuple
1067 eigenvalues_and_eigenvectors(const double tol=1.e-12) const;
1068
1069 /**
1070 \brief
1071 swaps the components axis0 and axis1
1072 *
1073 */
1074 ESCRIPT_DLL_API
1075 Data
1076 swapaxes(const int axis0, const int axis1) const;
1077
1078 /**
1079 \brief
1080 Return the error function erf of each data point of this Data object.
1081 *
1082 */
1083 ESCRIPT_DLL_API
1084 Data
1085 erf() const;
1086
1087 /**
1088 \brief
1089 Return the sin of each data point of this Data object.
1090 *
1091 */
1092 ESCRIPT_DLL_API
1093 Data
1094 sin() const;
1095
1096 /**
1097 \brief
1098 Return the cos of each data point of this Data object.
1099 *
1100 */
1101 ESCRIPT_DLL_API
1102 Data
1103 cos() const;
1104
1105 /**
1106 \brief
1107 Return the tan of each data point of this Data object.
1108 *
1109 */
1110 ESCRIPT_DLL_API
1111 Data
1112 tan() const;
1113
1114 /**
1115 \brief
1116 Return the asin of each data point of this Data object.
1117 *
1118 */
1119 ESCRIPT_DLL_API
1120 Data
1121 asin() const;
1122
1123 /**
1124 \brief
1125 Return the acos of each data point of this Data object.
1126 *
1127 */
1128 ESCRIPT_DLL_API
1129 Data
1130 acos() const;
1131
1132 /**
1133 \brief
1134 Return the atan of each data point of this Data object.
1135 *
1136 */
1137 ESCRIPT_DLL_API
1138 Data
1139 atan() const;
1140
1141 /**
1142 \brief
1143 Return the sinh of each data point of this Data object.
1144 *
1145 */
1146 ESCRIPT_DLL_API
1147 Data
1148 sinh() const;
1149
1150 /**
1151 \brief
1152 Return the cosh of each data point of this Data object.
1153 *
1154 */
1155 ESCRIPT_DLL_API
1156 Data
1157 cosh() const;
1158
1159 /**
1160 \brief
1161 Return the tanh of each data point of this Data object.
1162 *
1163 */
1164 ESCRIPT_DLL_API
1165 Data
1166 tanh() const;
1167
1168 /**
1169 \brief
1170 Return the asinh of each data point of this Data object.
1171 *
1172 */
1173 ESCRIPT_DLL_API
1174 Data
1175 asinh() const;
1176
1177 /**
1178 \brief
1179 Return the acosh of each data point of this Data object.
1180 *
1181 */
1182 ESCRIPT_DLL_API
1183 Data
1184 acosh() const;
1185
1186 /**
1187 \brief
1188 Return the atanh of each data point of this Data object.
1189 *
1190 */
1191 ESCRIPT_DLL_API
1192 Data
1193 atanh() const;
1194
1195 /**
1196 \brief
1197 Return the log to base 10 of each data point of this Data object.
1198 *
1199 */
1200 ESCRIPT_DLL_API
1201 Data
1202 log10() const;
1203
1204 /**
1205 \brief
1206 Return the natural log of each data point of this Data object.
1207 *
1208 */
1209 ESCRIPT_DLL_API
1210 Data
1211 log() const;
1212
1213 /**
1214 \brief
1215 Return the exponential function of each data point of this Data object.
1216 *
1217 */
1218 ESCRIPT_DLL_API
1219 Data
1220 exp() const;
1221
1222 /**
1223 \brief
1224 Return the square root of each data point of this Data object.
1225 *
1226 */
1227 ESCRIPT_DLL_API
1228 Data
1229 sqrt() const;
1230
1231 /**
1232 \brief
1233 Return the negation of each data point of this Data object.
1234 *
1235 */
1236 ESCRIPT_DLL_API
1237 Data
1238 neg() const;
1239
1240 /**
1241 \brief
1242 Return the identity of each data point of this Data object.
1243 Simply returns this object unmodified.
1244 *
1245 */
1246 ESCRIPT_DLL_API
1247 Data
1248 pos() const;
1249
1250 /**
1251 \brief
1252 Return the given power of each data point of this Data object.
1253
1254 \param right Input - the power to raise the object to.
1255 *
1256 */
1257 ESCRIPT_DLL_API
1258 Data
1259 powD(const Data& right) const;
1260
1261 /**
1262 \brief
1263 Return the given power of each data point of this boost python object.
1264
1265 \param right Input - the power to raise the object to.
1266 *
1267 */
1268 ESCRIPT_DLL_API
1269 Data
1270 powO(const boost::python::object& right) const;
1271
1272 /**
1273 \brief
1274 Return the given power of each data point of this boost python object.
1275
1276 \param left Input - the bases
1277 *
1278 */
1279
1280 ESCRIPT_DLL_API
1281 Data
1282 rpowO(const boost::python::object& left) const;
1283
1284 /**
1285 \brief
1286 writes the object to a file in the DX file format
1287 */
1288 ESCRIPT_DLL_API
1289 void
1290 saveDX(std::string fileName) const;
1291
1292 /**
1293 \brief
1294 writes the object to a file in the VTK file format
1295 */
1296 ESCRIPT_DLL_API
1297 void
1298 saveVTK(std::string fileName) const;
1299
1300
1301
1302 /**
1303 \brief
1304 Overloaded operator +=
1305 \param right - Input - The right hand side.
1306 *
1307 */
1308 ESCRIPT_DLL_API
1309 Data& operator+=(const Data& right);
1310 ESCRIPT_DLL_API
1311 Data& operator+=(const boost::python::object& right);
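// Example usage (a sketch; assumes `fs` is a valid FunctionSpace and uses the
// three-argument Data constructor seen elsewhere in this file):
//   escript::Data a(1.0, DataTypes::ShapeType(), fs);  // scalar, value 1.0
//   escript::Data b(2.0, DataTypes::ShapeType(), fs);  // scalar, value 2.0
//   a += b;  // every data point of a is now 3.0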
1312
1313 ESCRIPT_DLL_API
1314 Data& operator=(const Data& other);
1315
1316 /**
1317 \brief
1318 Overloaded operator -=
1319 \param right - Input - The right hand side.
1320 *
1321 */
1322 ESCRIPT_DLL_API
1323 Data& operator-=(const Data& right);
1324 ESCRIPT_DLL_API
1325 Data& operator-=(const boost::python::object& right);
1326
1327 /**
1328 \brief
1329 Overloaded operator *=
1330 \param right - Input - The right hand side.
1331 *
1332 */
1333 ESCRIPT_DLL_API
1334 Data& operator*=(const Data& right);
1335 ESCRIPT_DLL_API
1336 Data& operator*=(const boost::python::object& right);
1337
1338 /**
1339 \brief
1340 Overloaded operator /=
1341 \param right - Input - The right hand side.
1342 *
1343 */
1344 ESCRIPT_DLL_API
1345 Data& operator/=(const Data& right);
1346 ESCRIPT_DLL_API
1347 Data& operator/=(const boost::python::object& right);
1348
1349 /**
1350 \brief Return the inverse of matrices.
1351 */
1352 ESCRIPT_DLL_API
1353 Data
1354 matrixInverse() const;
1355
1356 /**
1357 \brief
1358 Returns true if this can be interpolated to functionspace.
1359 */
1360 ESCRIPT_DLL_API
1361 bool
1362 probeInterpolation(const FunctionSpace& functionspace) const;
1363
1364 /**
1365 Data object slicing methods.
1366 */
1367
1368 /**
1369 \brief
1370 Returns a slice from this Data object.
1371
1372 \details
1373 Implements the [] get operator in python.
1374 Calls getSlice.
1375
1376 \param key - Input - python slice tuple specifying
1377 slice to return.
1378 */
1379 ESCRIPT_DLL_API
1380 Data
1381 getItem(const boost::python::object& key) const;
1382
1383 /**
1384 \brief
1385 Copies slice from value into this Data object.
1386
1387 Implements the [] set operator in python.
1388 Calls setSlice.
1389
1390 \param key - Input - python slice tuple specifying
1391 slice to copy from value.
1392 \param value - Input - Data object to copy from.
1393 */
1394 ESCRIPT_DLL_API
1395 void
1396 setItemD(const boost::python::object& key,
1397 const Data& value);
1398
1399 ESCRIPT_DLL_API
1400 void
1401 setItemO(const boost::python::object& key,
1402 const boost::python::object& value);
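// From Python these back the subscript syntax (a sketch, assuming the usual
// boost::python bindings for this class):
//   d[key]          # dispatches to getItem
//   d[key] = value  # dispatches to setItemD for Data values, setItemO otherwise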
1403
1404 // The following public methods should be treated as private.
1405
1406 /**
1407 \brief
1408 Perform the given unary operation on every element of every data point in
1409 this Data object.
1410 */
1411 template <class UnaryFunction>
1412 ESCRIPT_DLL_API
1413 inline
1414 void
1415 unaryOp2(UnaryFunction operation);
1416
1417 /**
1418 \brief
1419 Return a Data object containing the specified slice of
1420 this Data object.
1421 \param region - Input - Region to copy.
1422 *
1423 */
1424 ESCRIPT_DLL_API
1425 Data
1426 getSlice(const DataTypes::RegionType& region) const;
1427
1428 /**
1429 \brief
1430 Copy the specified slice from the given value into this
1431 Data object.
1432 \param value - Input - Data to copy from.
1433 \param region - Input - Region to copy.
1434 *
1435 */
1436 ESCRIPT_DLL_API
1437 void
1438 setSlice(const Data& value,
1439 const DataTypes::RegionType& region);
1440
1441 /**
1442 \brief
1443 print the data values to stdout. Used for debugging
1444 */
1445 ESCRIPT_DLL_API
1446 void
1447 print(void);
1448
1449 /**
1450 \brief
1451 return the MPI rank number of the local data
1452 MPI_COMM_WORLD is assumed and the result of MPI_Comm_rank()
1453 is returned
1454 */
1455 ESCRIPT_DLL_API
1456 int
1457 get_MPIRank(void) const;
1458
1459 /**
1460 \brief
1461 return the MPI size (number of processes) for the local data
1462 MPI_COMM_WORLD is assumed and the result of MPI_Comm_size()
1463 is returned
1464 */
1465 ESCRIPT_DLL_API
1466 int
1467 get_MPISize(void) const;
1468
1469 /**
1470 \brief
1471 return the MPI communicator for the local data
1472 MPI_COMM_WORLD is assumed and returned.
1473 */
1474 ESCRIPT_DLL_API
1475 MPI_Comm
1476 get_MPIComm(void) const;
1477
1478 /**
1479 \brief
1480 return the object produced by the factory, which is a DataConstant or DataExpanded
1481 TODO Ownership of this object should be explained in the documentation.
1482 */
1483 ESCRIPT_DLL_API
1484 DataAbstract*
1485 borrowData(void) const;
1486
1487 ESCRIPT_DLL_API
1488 DataAbstract_ptr
1489 borrowDataPtr(void) const;
1490
1491 ESCRIPT_DLL_API
1492 DataReady_ptr
1493 borrowReadyPtr(void) const;
1494
1495
1496
1497 /**
1498 \brief
1499 Return a reference to the data value at the specified offset.
1500 TODO Eventually these should be inlined.
1501 \param i - position(offset) in the underlying datastructure
1502 */
1503
1504 ESCRIPT_DLL_API
1505 DataTypes::ValueType::const_reference
1506 getDataAtOffsetRO(DataTypes::ValueType::size_type i);
1507
1508
1509 ESCRIPT_DLL_API
1510 DataTypes::ValueType::reference
1511 getDataAtOffsetRW(DataTypes::ValueType::size_type i);
1512
1513
1514
1515 protected:
1516
1517 private:
1518
1519 template <class BinaryOp>
1520 double
1521 #ifdef PASO_MPI
1522 lazyAlgWorker(double init, MPI_Op mpiop_type);
1523 #else
1524 lazyAlgWorker(double init);
1525 #endif
1526
1527 double
1528 LsupWorker() const;
1529
1530 double
1531 supWorker() const;
1532
1533 double
1534 infWorker() const;
1535
1536 boost::python::object
1537 integrateWorker() const;
1538
1539 void
1540 calc_minGlobalDataPoint(int& ProcNo, int& DataPointNo) const;
1541
1542 void
1543 calc_maxGlobalDataPoint(int& ProcNo, int& DataPointNo) const;
1544
1545 // For internal use in Data.cpp only!
1546 // other uses should call the main entry points and allow laziness
1547 Data
1548 minval_nonlazy() const;
1549
1550 // For internal use in Data.cpp only!
1551 Data
1552 maxval_nonlazy() const;
1553
1554
1555 /**
1556 \brief
1557 Check that *this and the right operand are compatible. Throws
1558 an exception if they aren't.
1559 \param right - Input - The right hand side.
1560 */
1561 inline
1562 void
1563 operandCheck(const Data& right) const
1564 {
1565 return m_data->operandCheck(*(right.m_data.get()));
1566 }
1567
1568 /**
1569 \brief
1570 Perform the specified reduction algorithm on every element of every data point in
1571 this Data object according to the given function and return the single value result.
1572 */
1573 template <class BinaryFunction>
1574 inline
1575 double
1576 algorithm(BinaryFunction operation,
1577 double initial_value) const;
1578
1579 /**
1580 \brief
1581 Reduce each data-point in this Data object using the given operation. Return a Data
1582 object with the same number of data-points, but with each data-point containing only
1583 one value - the result of the reduction operation on the corresponding data-point in
1584 this Data object
1585 */
1586 template <class BinaryFunction>
1587 inline
1588 Data
1589 dp_algorithm(BinaryFunction operation,
1590 double initial_value) const;
1591
1592 /**
1593 \brief
1594 Perform the given binary operation on all of the data's elements.
1595 The underlying type of the right hand side (right) determines the final
1596 type of *this after the operation. For example if the right hand side
1597 is expanded *this will be expanded if necessary.
1598 RHS is a Data object.
1599 */
1600 template <class BinaryFunction>
1601 inline
1602 void
1603 binaryOp(const Data& right,
1604 BinaryFunction operation);
1605
1606 /**
1607 \brief
1608 Convert the data type of the RHS to match this.
1609 \param right - Input - data type to match.
1610 */
1611 void
1612 typeMatchLeft(Data& right) const;
1613
1614 /**
1615 \brief
1616 Convert the data type of this to match the RHS.
1617 \param right - Input - data type to match.
1618 */
1619 void
1620 typeMatchRight(const Data& right);
1621
1622 /**
1623 \brief
1624 Construct a Data object of the appropriate type.
1625 */
1626
1627 void
1628 initialise(const DataTypes::ValueType& value,
1629 const DataTypes::ShapeType& shape,
1630 const FunctionSpace& what,
1631 bool expanded);
1632
1633 void
1634 initialise(const WrappedArray& value,
1635 const FunctionSpace& what,
1636 bool expanded);
1637
1638 //
1639 // flag to protect the data object against any update
1640 bool m_protected;
1641 mutable bool m_shared;
1642 bool m_lazy;
1643
1644 //
1645 // pointer to the actual data object
1646 // boost::shared_ptr<DataAbstract> m_data;
1647 DataAbstract_ptr m_data;
1648
1649 // If possible please use getReadyPtr instead.
1650 // But see warning below.
1651 const DataReady*
1652 getReady() const;
1653
1654 DataReady*
1655 getReady();
1656
1657
1658 // Be wary of using this for local operations since it (temporarily) increases reference count.
1659 // If you are just using this to call a method on DataReady instead of DataAbstract consider using
1660 // getReady() instead
1661 DataReady_ptr
1662 getReadyPtr();
1663
1664 const_DataReady_ptr
1665 getReadyPtr() const;
1666
1667
1668 /**
1669 \brief Update the Data's shared flag
1670 This indicates that the DataAbstract used by this object is now shared (or no longer shared).
1671 For internal use only.
1672 */
1673 void updateShareStatus(bool nowshared) const
1674 {
1675 m_shared=nowshared; // m_shared is mutable
1676 }
1677
1678 // In the isShared() method below:
1679 // A problem would occur if m_data (the address pointed to) were being modified
1680 // while the call m_data->is_shared is being executed.
1681 //
1682 // Q: So why do I think this code can be thread safe/correct?
1683 // A: We need to make some assumptions.
1684 // 1. We assume it is acceptable to return true under some conditions when we aren't shared.
1685 // 2. We assume that no constructions or assignments which share previously unshared
1686 // data will occur while this call is executing. This is consistent with the way Data:: and C are written.
1687 //
1688 // This means that the only transition we need to consider, is when a previously shared object is
1689 // not shared anymore. ie. the other objects have been destroyed or a deep copy has been made.
1690 // In those cases the m_shared flag changes to false after m_data has completed changing.
1691 // For any threads executing before the flag switches they will assume the object is still shared.
1692 bool isShared() const
1693 {
1694 return m_shared;
1695 /* if (m_shared) return true;
1696 if (m_data->isShared())
1697 {
1698 updateShareStatus(true);
1699 return true;
1700 }
1701 return false;*/
1702 }
1703
1704 void forceResolve()
1705 {
1706 if (isLazy())
1707 {
1708 #ifdef _OPENMP
1709 if (omp_in_parallel())
1710 { // Yes this is throwing an exception out of an omp thread which is forbidden.
1711 throw DataException("Please do not call forceResolve() in a parallel region.");
1712 }
1713 #endif
1714 resolve();
1715 }
1716 }
1717
1718 /**
1719 \brief if another object is sharing our member data, make a copy to work with instead.
1720 This code should only be called from single threaded sections of code.
1721 */
1722 void exclusiveWrite()
1723 {
1724 #ifdef _OPENMP
1725 if (omp_in_parallel())
1726 {
1728 throw DataException("Programming error. Please do not run exclusiveWrite() in multi-threaded sections.");
1729 }
1730 #endif
1731 forceResolve();
1732 if (isShared())
1733 {
1734 DataAbstract* t=m_data->deepCopy();
1735 set_m_data(DataAbstract_ptr(t));
1736 }
1737 }
1738
1739 /**
1740 \brief checks if caller can have exclusive write to the object
1741 */
1742 void checkExclusiveWrite()
1743 {
1744 if (isLazy() || isShared())
1745 {
1746 throw DataException("Programming error. ExclusiveWrite required - please call requireWrite()");
1747 }
1748 }
1749
1750 /**
1751 \brief Modify the data abstract hosted by this Data object
1752 For internal use only.
1753 Passing a pointer to null is permitted (do this in the destructor)
1754 \warning Only to be called in single threaded code or inside a single/critical section. This method needs to be atomic.
1755 */
1756 void set_m_data(DataAbstract_ptr p);
1757
1758 friend class DataAbstract; // To allow calls to updateShareStatus
1759
1760 };
1761
1762 } // end namespace escript
1763
1764
1765 // No, this is not supposed to be at the top of the file
1766 // DataAbstract needs to be declared first, then DataReady needs to be fully declared
1767 // so that I can dynamic cast between them below.
1768 #include "DataReady.h"
1769 #include "DataLazy.h"
1770
1771 namespace escript
1772 {
1773
1774 inline
1775 const DataReady*
1776 Data::getReady() const
1777 {
1778 const DataReady* dr=dynamic_cast<const DataReady*>(m_data.get());
1779 EsysAssert((dr!=0), "Error - casting to DataReady.");
1780 return dr;
1781 }
1782
1783 inline
1784 DataReady*
1785 Data::getReady()
1786 {
1787 DataReady* dr=dynamic_cast<DataReady*>(m_data.get());
1788 EsysAssert((dr!=0), "Error - casting to DataReady.");
1789 return dr;
1790 }
1791
1792 // Be wary of using this for local operations since it (temporarily) increases reference count.
1793 // If you are just using this to call a method on DataReady instead of DataAbstract consider using
1794 // getReady() instead
1795 inline
1796 DataReady_ptr
1797 Data::getReadyPtr()
1798 {
1799 DataReady_ptr dr=boost::dynamic_pointer_cast<DataReady>(m_data);
1800 EsysAssert((dr.get()!=0), "Error - casting to DataReady.");
1801 return dr;
1802 }
1803
1804
1805 inline
1806 const_DataReady_ptr
1807 Data::getReadyPtr() const
1808 {
1809 const_DataReady_ptr dr=boost::dynamic_pointer_cast<const DataReady>(m_data);
1810 EsysAssert((dr.get()!=0), "Error - casting to DataReady.");
1811 return dr;
1812 }
1813
1814 inline
1815 DataAbstract::ValueType::value_type*
1816 Data::getSampleDataRW(DataAbstract::ValueType::size_type sampleNo)
1817 {
1818 if (isLazy())
1819 {
1820 throw DataException("Error, attempt to acquire RW access to lazy data. Please call requireWrite() first.");
1821 }
1822 return getReady()->getSampleDataRW(sampleNo);
1823 }
1824
1825 inline
1826 const DataAbstract::ValueType::value_type*
1827 Data::getSampleDataRO(DataAbstract::ValueType::size_type sampleNo)
1828 {
1829 DataLazy* l=dynamic_cast<DataLazy*>(m_data.get());
1830 if (l!=0)
1831 {
1832 size_t offset=0;
1833 const DataTypes::ValueType* res=l->resolveSample(sampleNo,offset);
1834 return &((*res)[offset]);
1835 }
1836 return getReady()->getSampleDataRO(sampleNo);
1837 }
1838
1839
1840
1841 /**
1842 Modify a filename for MPI parallel output to multiple files
1843 */
1844 char *Escript_MPI_appendRankToFileName(const char *, int, int);
1845
1846 /**
1847 Binary Data object operators.
1848 */
1849 inline double rpow(double x,double y)
1850 {
1851 return pow(y,x);
1852 }
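// Note the reversed argument order: rpow(x, y) computes pow(y, x), e.g.
// rpow(2.0, 10.0) == pow(10.0, 2.0) == 100.0. This matches the reflected
// power operation exposed via rpowO above.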
1853
1854 /**
1855 \brief
1856 Operator+
1857 Takes two Data objects.
1858 */
1859 ESCRIPT_DLL_API Data operator+(const Data& left, const Data& right);
1860
1861 /**
1862 \brief
1863 Operator-
1864 Takes two Data objects.
1865 */
1866 ESCRIPT_DLL_API Data operator-(const Data& left, const Data& right);
1867
1868 /**
1869 \brief
1870 Operator*
1871 Takes two Data objects.
1872 */
1873 ESCRIPT_DLL_API Data operator*(const Data& left, const Data& right);
1874
1875 /**
1876 \brief
1877 Operator/
1878 Takes two Data objects.
1879 */
1880 ESCRIPT_DLL_API Data operator/(const Data& left, const Data& right);
1881
1882 /**
1883 \brief
1884 Operator+
1885 Takes LHS Data object and RHS python::object.
1886 python::object must be convertible to Data type.
1887 */
1888 ESCRIPT_DLL_API Data operator+(const Data& left, const boost::python::object& right);
1889
1890 /**
1891 \brief
1892 Operator-
1893 Takes LHS Data object and RHS python::object.
1894 python::object must be convertible to Data type.
1895 */
1896 ESCRIPT_DLL_API Data operator-(const Data& left, const boost::python::object& right);
1897
1898 /**
1899 \brief
1900 Operator*
1901 Takes LHS Data object and RHS python::object.
1902 python::object must be convertible to Data type.
1903 */
1904 ESCRIPT_DLL_API Data operator*(const Data& left, const boost::python::object& right);
1905
1906 /**
1907 \brief
1908 Operator/
1909 Takes LHS Data object and RHS python::object.
1910 python::object must be convertible to Data type.
1911 */
1912 ESCRIPT_DLL_API Data operator/(const Data& left, const boost::python::object& right);
1913
1914 /**
1915 \brief
1916 Operator+
1917 Takes LHS python::object and RHS Data object.
1918 python::object must be convertible to Data type.
1919 */
1920 ESCRIPT_DLL_API Data operator+(const boost::python::object& left, const Data& right);
1921
1922 /**
1923 \brief
1924 Operator-
1925 Takes LHS python::object and RHS Data object.
1926 python::object must be convertible to Data type.
1927 */
1928 ESCRIPT_DLL_API Data operator-(const boost::python::object& left, const Data& right);
1929
1930 /**
1931 \brief
1932 Operator*
1933 Takes LHS python::object and RHS Data object.
1934 python::object must be convertible to Data type.
1935 */
1936 ESCRIPT_DLL_API Data operator*(const boost::python::object& left, const Data& right);
1937
1938 /**
1939 \brief
1940 Operator/
1941 Takes LHS python::object and RHS Data object.
1942 python::object must be convertible to Data type.
1943 */
1944 ESCRIPT_DLL_API Data operator/(const boost::python::object& left, const Data& right);
1945
1946
1947
1948 /**
1949 \brief
1950 Output operator
1951 */
1952 ESCRIPT_DLL_API std::ostream& operator<<(std::ostream& o, const Data& data);
1953
1954 /**
1955 \brief
1956 Compute a tensor product of two Data objects
1957 \param arg_0 - Input - Data object
1958 \param arg_1 - Input - Data object
1959 \param axis_offset - Input - axis offset
1960 \param transpose - Input - 0: transpose neither, 1: transpose arg0, 2: transpose arg1
1961 */
1962 ESCRIPT_DLL_API
1963 Data
1964 C_GeneralTensorProduct(Data& arg_0,
1965 Data& arg_1,
1966 int axis_offset=0,
1967 int transpose=0);
1968
1969 /**
1970 \brief
1971 Perform the given binary operation with this and right as operands.
1972 Right is a Data object.
1973 */
1974 template <class BinaryFunction>
1975 inline
1976 void
1977 Data::binaryOp(const Data& right,
1978 BinaryFunction operation)
1979 {
1980 //
1981 // updating a rank-zero object with a higher-rank RHS is not supported
1982 if (getDataPointRank()==0 && right.getDataPointRank()!=0) {
1983 throw DataException("Error - attempt to update rank zero object with object with rank bigger than zero.");
1984 }
1985
1986 if (isLazy() || right.isLazy())
1987 {
1988 throw DataException("Programmer error - attempt to call binaryOp with Lazy Data.");
1989 }
1990 //
1991 // initially make the temporary a shallow copy
1992 Data tempRight(right);
1993
1994 if (getFunctionSpace()!=right.getFunctionSpace()) {
1995 if (right.probeInterpolation(getFunctionSpace())) {
1996 //
1997 // an interpolation is required so create a new Data
1998 tempRight=Data(right,this->getFunctionSpace());
1999 } else if (probeInterpolation(right.getFunctionSpace())) {
2000 //
2001 // interpolate onto the RHS function space
2002 Data tempLeft(*this,right.getFunctionSpace());
2003 // m_data=tempLeft.m_data;
2004 set_m_data(tempLeft.m_data);
2005 }
2006 }
2007 operandCheck(tempRight);
2008 //
2009 // ensure this has the right type for the RHS
2010 typeMatchRight(tempRight);
2011 //
2012 // Need to cast to the concrete types so that the correct binaryOp
2013 // is called.
2014 if (isExpanded()) {
2015 //
2016 // Expanded data will be done in parallel, the right hand side can be
2017 // of any data type
2018 DataExpanded* leftC=dynamic_cast<DataExpanded*>(m_data.get());
2019 EsysAssert((leftC!=0), "Programming error - casting to DataExpanded.");
2020 escript::binaryOp(*leftC,*(tempRight.getReady()),operation);
2021 } else if (isTagged()) {
2022 //
2023 // Tagged data is operated on serially, the right hand side can be
2024 // either DataConstant or DataTagged
2025 DataTagged* leftC=dynamic_cast<DataTagged*>(m_data.get());
2026 EsysAssert((leftC!=0), "Programming error - casting to DataTagged.");
2027 if (right.isTagged()) {
2028 DataTagged* rightC=dynamic_cast<DataTagged*>(tempRight.m_data.get());
2029 EsysAssert((rightC!=0), "Programming error - casting to DataTagged.");
2030 escript::binaryOp(*leftC,*rightC,operation);
2031 } else {
2032 DataConstant* rightC=dynamic_cast<DataConstant*>(tempRight.m_data.get());
2033 EsysAssert((rightC!=0), "Programming error - casting to DataConstant.");
2034 escript::binaryOp(*leftC,*rightC,operation);
2035 }
2036 } else if (isConstant()) {
2037 DataConstant* leftC=dynamic_cast<DataConstant*>(m_data.get());
2038 DataConstant* rightC=dynamic_cast<DataConstant*>(tempRight.m_data.get());
2039 EsysAssert((leftC!=0 && rightC!=0), "Programming error - casting to DataConstant.");
2040 escript::binaryOp(*leftC,*rightC,operation);
2041 }
2042 }
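// A sketch of a typical call site (an assumption about Data.cpp, which is not
// shown here): an in-place element-wise addition could be written as
//   binaryOp(right, std::plus<double>());
// The functor is then applied by escript::binaryOp on the concrete
// DataExpanded/DataTagged/DataConstant pairs selected above.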
2043
2044 /**
2045 \brief
2046 Perform the given Data object reduction algorithm on this and return the result.
2047 The given operation combines every element of every data point; thus the
2048 argument object (*this) is a rank-n Data object and the result is a scalar.
2049 Calls escript::algorithm.
2050 */
2051 template <class BinaryFunction>
2052 inline
2053 double
2054 Data::algorithm(BinaryFunction operation, double initial_value) const
2055 {
2056 if (isExpanded()) {
2057 DataExpanded* leftC=dynamic_cast<DataExpanded*>(m_data.get());
2058 EsysAssert((leftC!=0), "Programming error - casting to DataExpanded.");
2059 return escript::algorithm(*leftC,operation,initial_value);
2060 } else if (isTagged()) {
2061 DataTagged* leftC=dynamic_cast<DataTagged*>(m_data.get());
2062 EsysAssert((leftC!=0), "Programming error - casting to DataTagged.");
2063 return escript::algorithm(*leftC,operation,initial_value);
2064 } else if (isConstant()) {
2065 DataConstant* leftC=dynamic_cast<DataConstant*>(m_data.get());
2066 EsysAssert((leftC!=0), "Programming error - casting to DataConstant.");
2067 return escript::algorithm(*leftC,operation,initial_value);
2068 } else if (isEmpty()) {
2069 throw DataException("Error - Operations not permitted on instances of DataEmpty.");
2070 } else if (isLazy()) {
2071 throw DataException("Error - Operations not permitted on instances of DataLazy.");
2072 } else {
2073 throw DataException("Error - Data encapsulates an unknown type.");
2074 }
2075 }
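// A sketch of a reduction call (assumption: a max-of-absolute-values functor,
// here called AbsMax, with the usual binary-functor interface):
//   double lsup = algorithm(AbsMax<double>(), 0.0);
// This folds every element of every data point into a single scalar.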
2076
2077 /**
2078 \brief
2079 Perform the given data point reduction algorithm on data and return the result.
2080 The given operation combines every element within each data point into a scalar;
2081 thus the argument object is a rank-n Data object, and the returned object is a
2082 rank-0 Data object.
2083 Calls escript::dp_algorithm.
2084 */
2085 template <class BinaryFunction>
2086 inline
2087 Data
2088 Data::dp_algorithm(BinaryFunction operation, double initial_value) const
2089 {
2090 if (isEmpty()) {
2091 throw DataException("Error - Operations not permitted on instances of DataEmpty.");
2092 }
2093 else if (isExpanded()) {
2094 Data result(0,DataTypes::ShapeType(),getFunctionSpace(),isExpanded());
2095 DataExpanded* dataE=dynamic_cast<DataExpanded*>(m_data.get());
2096 DataExpanded* resultE=dynamic_cast<DataExpanded*>(result.m_data.get());
2097 EsysAssert((dataE!=0), "Programming error - casting data to DataExpanded.");
2098 EsysAssert((resultE!=0), "Programming error - casting result to DataExpanded.");
2099 escript::dp_algorithm(*dataE,*resultE,operation,initial_value);
2100 return result;
2101 }
2102 else if (isTagged()) {
2103 DataTagged* dataT=dynamic_cast<DataTagged*>(m_data.get());
2104 EsysAssert((dataT!=0), "Programming error - casting data to DataTagged.");
2105 DataTypes::ValueType defval(1);
2106 defval[0]=0;
2107 DataTagged* resultT=new DataTagged(getFunctionSpace(), DataTypes::scalarShape, defval, dataT);
2108 escript::dp_algorithm(*dataT,*resultT,operation,initial_value);
2109 return Data(resultT); // note: the Data object now owns the resultT pointer
2110 }
2111 else if (isConstant()) {
2112 Data result(0,DataTypes::ShapeType(),getFunctionSpace(),isExpanded());
2113 DataConstant* dataC=dynamic_cast<DataConstant*>(m_data.get());
2114 DataConstant* resultC=dynamic_cast<DataConstant*>(result.m_data.get());
2115 EsysAssert((dataC!=0), "Programming error - casting data to DataConstant.");
2116 EsysAssert((resultC!=0), "Programming error - casting result to DataConstant.");
2117 escript::dp_algorithm(*dataC,*resultC,operation,initial_value);
2118 return result;
2119 } else if (isLazy()) {
2120 throw DataException("Error - Operations not permitted on instances of DataLazy.");
2121 } else {
2122 throw DataException("Error - Data encapsulates an unknown type.");
2123 }
2124 }
2125
2126 /**
2127 \brief
2128 Compute a tensor operation with two Data objects
2129 \param arg_0 - Input - Data object
2130 \param arg_1 - Input - Data object
2131 \param operation - Input - Binary op functor
2132 */
2133 template <typename BinaryFunction>
2134 inline
2135 Data
2136 C_TensorBinaryOperation(Data const &arg_0,
2137 Data const &arg_1,
2138 BinaryFunction operation)
2139 {
2140 if (arg_0.isEmpty() || arg_1.isEmpty())
2141 {
2142 throw DataException("Error - Operations not permitted on instances of DataEmpty.");
2143 }
2144 if (arg_0.isLazy() || arg_1.isLazy())
2145 {
2146 throw DataException("Error - Operations not permitted on lazy data.");
2147 }
2148 // Interpolate if necessary and find an appropriate function space
2149 Data arg_0_Z, arg_1_Z;
2150 if (arg_0.getFunctionSpace()!=arg_1.getFunctionSpace()) {
2151 if (arg_0.probeInterpolation(arg_1.getFunctionSpace())) {
2152 arg_0_Z = arg_0.interpolate(arg_1.getFunctionSpace());
2153 arg_1_Z = Data(arg_1);
2154 }
2155 else if (arg_1.probeInterpolation(arg_0.getFunctionSpace())) {
2156 arg_1_Z=arg_1.interpolate(arg_0.getFunctionSpace());
2157 arg_0_Z =Data(arg_0);
2158 }
2159 else {
2160 throw DataException("Error - C_TensorBinaryOperation: arguments have incompatible function spaces.");
2161 }
2162 } else {
2163 arg_0_Z = Data(arg_0);
2164 arg_1_Z = Data(arg_1);
2165 }
2166 // Get rank and shape of inputs
2167 int rank0 = arg_0_Z.getDataPointRank();
2168 int rank1 = arg_1_Z.getDataPointRank();
2169 DataTypes::ShapeType shape0 = arg_0_Z.getDataPointShape();
2170 DataTypes::ShapeType shape1 = arg_1_Z.getDataPointShape();
2171 int size0 = arg_0_Z.getDataPointSize();
2172 int size1 = arg_1_Z.getDataPointSize();
2173 // Declare output Data object
2174 Data res;
2175
2176 if (shape0 == shape1) {
2177 if (arg_0_Z.isConstant() && arg_1_Z.isConstant()) {
2178 res = Data(0.0, shape0, arg_1_Z.getFunctionSpace()); // DataConstant output
2179 const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(0));
2180 const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(0));
2181 double *ptr_2 = &(res.getDataAtOffsetRW(0));
2182
2183 tensor_binary_operation(size0, ptr_0, ptr_1, ptr_2, operation);
2184 }
2185 else if (arg_0_Z.isConstant() && arg_1_Z.isTagged()) {
2186
2187 // Prepare the DataConstant input
2188 DataConstant* tmp_0=dynamic_cast<DataConstant*>(arg_0_Z.borrowData());
2189
2190 // Borrow DataTagged input from Data object
2191 DataTagged* tmp_1=dynamic_cast<DataTagged*>(arg_1_Z.borrowData());
2192
2193 // Prepare a DataTagged output (tmp_2)
2194 res = Data(0.0, shape0, arg_1_Z.getFunctionSpace()); // DataTagged output
2195 res.tag();
2196 DataTagged* tmp_2=dynamic_cast<DataTagged*>(res.borrowData());
2197
2198 // Prepare offset into DataConstant
2199 int offset_0 = tmp_0->getPointOffset(0,0);
2200 const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
2201
2202 // Get the pointers to the actual data
2203 const double *ptr_1 = &(tmp_1->getDefaultValueRO(0));
2204 double *ptr_2 = &(tmp_2->getDefaultValueRW(0));
2205
2206 // Compute a result for the default
2207 tensor_binary_operation(size0, ptr_0, ptr_1, ptr_2, operation);
2208 // Compute a result for each tag
2209 const DataTagged::DataMapType& lookup_1=tmp_1->getTagLookup();
2210 DataTagged::DataMapType::const_iterator i; // i->first is a tag, i->second is an offset into memory
2211 for (i=lookup_1.begin();i!=lookup_1.end();i++) {
2212 tmp_2->addTag(i->first);
2213 const double *ptr_1 = &(tmp_1->getDataByTagRO(i->first,0));
2214 double *ptr_2 = &(tmp_2->getDataByTagRW(i->first,0));
2215
2216 tensor_binary_operation(size0, ptr_0, ptr_1, ptr_2, operation);
2217 }
2218
2219 }
2220 else if (arg_0_Z.isConstant() && arg_1_Z.isExpanded()) {
2221 res = Data(0.0, shape0, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
2222 DataConstant* tmp_0=dynamic_cast<DataConstant*>(arg_0_Z.borrowData());
2223 DataExpanded* tmp_1=dynamic_cast<DataExpanded*>(arg_1_Z.borrowData());
2224 DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());
2225
2226 int sampleNo_1,dataPointNo_1;
2227 int numSamples_1 = arg_1_Z.getNumSamples();
2228 int numDataPointsPerSample_1 = arg_1_Z.getNumDataPointsPerSample();
2229 int offset_0 = tmp_0->getPointOffset(0,0);
2230 res.requireWrite();
2231 #pragma omp parallel for private(sampleNo_1,dataPointNo_1) schedule(static)
2232 for (sampleNo_1 = 0; sampleNo_1 < numSamples_1; sampleNo_1++) {
2233 for (dataPointNo_1 = 0; dataPointNo_1 < numDataPointsPerSample_1; dataPointNo_1++) {
2234 int offset_1 = tmp_1->getPointOffset(sampleNo_1,dataPointNo_1);
2235 int offset_2 = tmp_2->getPointOffset(sampleNo_1,dataPointNo_1);
2236 const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
2237 const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
2238 double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
2239 tensor_binary_operation(size0, ptr_0, ptr_1, ptr_2, operation);
2240 }
2241 }
2242
2243 }
2244 else if (arg_0_Z.isTagged() && arg_1_Z.isConstant()) {
2245 // Borrow DataTagged input from Data object
2246 DataTagged* tmp_0=dynamic_cast<DataTagged*>(arg_0_Z.borrowData());
2247
2248 // Prepare the DataConstant input
2249 DataConstant* tmp_1=dynamic_cast<DataConstant*>(arg_1_Z.borrowData());
2250
2251 // Prepare a DataTagged output (tmp_2)
2252 res = Data(0.0, shape0, arg_0_Z.getFunctionSpace()); // DataTagged output
2253 res.tag();
2254 DataTagged* tmp_2=dynamic_cast<DataTagged*>(res.borrowData());
2255
2256 // Prepare offset into DataConstant
2257 int offset_1 = tmp_1->getPointOffset(0,0);
2258
2259 const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
2260 // Get the pointers to the actual data
2261 const double *ptr_0 = &(tmp_0->getDefaultValueRO(0));
2262 double *ptr_2 = &(tmp_2->getDefaultValueRW(0));
2263 // Compute a result for the default
2264 tensor_binary_operation(size0, ptr_0, ptr_1, ptr_2, operation);
2265 // Compute a result for each tag
2266 const DataTagged::DataMapType& lookup_0=tmp_0->getTagLookup();
2267 DataTagged::DataMapType::const_iterator i; // i->first is a tag, i->second is an offset into memory
2268 for (i=lookup_0.begin();i!=lookup_0.end();i++) {
2269 tmp_2->addTag(i->first);
2270 const double *ptr_0 = &(tmp_0->getDataByTagRO(i->first,0));
2271 double *ptr_2 = &(tmp_2->getDataByTagRW(i->first,0));
2272 tensor_binary_operation(size0, ptr_0, ptr_1, ptr_2, operation);
2273 }
2274
2275 }
2276 else if (arg_0_Z.isTagged() && arg_1_Z.isTagged()) {
2277 // Borrow DataTagged input from Data object
2278 DataTagged* tmp_0=dynamic_cast<DataTagged*>(arg_0_Z.borrowData());
2279
2280 // Borrow DataTagged input from Data object
2281 DataTagged* tmp_1=dynamic_cast<DataTagged*>(arg_1_Z.borrowData());
2282
2283 // Prepare a DataTagged output 2
2284 res = Data(0.0, shape0, arg_1_Z.getFunctionSpace());
2285 res.tag(); // DataTagged output
2286 DataTagged* tmp_2=dynamic_cast<DataTagged*>(res.borrowData());
2287
2288 // Get the pointers to the actual data
2289 const double *ptr_0 = &(tmp_0->getDefaultValueRO(0));
2290 const double *ptr_1 = &(tmp_1->getDefaultValueRO(0));
2291 double *ptr_2 = &(tmp_2->getDefaultValueRW(0));
2292
2293 // Compute a result for the default
2294 tensor_binary_operation(size0, ptr_0, ptr_1, ptr_2, operation);
2295 // Merge the tags
2296 DataTagged::DataMapType::const_iterator i; // i->first is a tag, i->second is an offset into memory
2297 const DataTagged::DataMapType& lookup_0=tmp_0->getTagLookup();
2298 const DataTagged::DataMapType& lookup_1=tmp_1->getTagLookup();
2299 for (i=lookup_0.begin();i!=lookup_0.end();i++) {
2300 tmp_2->addTag(i->first); // use tmp_2 to get correct shape
2301 }
2302 for (i=lookup_1.begin();i!=lookup_1.end();i++) {
2303 tmp_2->addTag(i->first);
2304 }
2305 // Compute a result for each tag
2306 const DataTagged::DataMapType& lookup_2=tmp_2->getTagLookup();
2307 for (i=lookup_2.begin();i!=lookup_2.end();i++) {
2308
2309 const double *ptr_0 = &(tmp_0->getDataByTagRO(i->first,0));
2310 const double *ptr_1 = &(tmp_1->getDataByTagRO(i->first,0));
2311 double *ptr_2 = &(tmp_2->getDataByTagRW(i->first,0));
2312
2313 tensor_binary_operation(size0, ptr_0, ptr_1, ptr_2, operation);
2314 }
2315
2316 }
2317 else if (arg_0_Z.isTagged() && arg_1_Z.isExpanded()) {
2318 // After finding a common function space above the two inputs have the same numSamples and num DPPS
2319 res = Data(0.0, shape0, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
2320 DataTagged* tmp_0=dynamic_cast<DataTagged*>(arg_0_Z.borrowData());
2321 DataExpanded* tmp_1=dynamic_cast<DataExpanded*>(arg_1_Z.borrowData());
2322 DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());
2323
2324 int sampleNo_0,dataPointNo_0;
2325 int numSamples_0 = arg_0_Z.getNumSamples();
2326 int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
2327 res.requireWrite();
2328 #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
2329 for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
2330 int offset_0 = tmp_0->getPointOffset(sampleNo_0,0); // They're all the same, so just use #0
2331 const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
2332 for (dataPointNo_0 = 0; dataPointNo_0 < numDataPointsPerSample_0; dataPointNo_0++) {
2333 int offset_1 = tmp_1->getPointOffset(sampleNo_0,dataPointNo_0);
2334 int offset_2 = tmp_2->getPointOffset(sampleNo_0,dataPointNo_0);
2335 const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
2336 double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
2337 tensor_binary_operation(size0, ptr_0, ptr_1, ptr_2, operation);
2338 }
2339 }
2340
2341 }
2342 else if (arg_0_Z.isExpanded() && arg_1_Z.isConstant()) {
2343 res = Data(0.0, shape0, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
2344 DataExpanded* tmp_0=dynamic_cast<DataExpanded*>(arg_0_Z.borrowData());
2345 DataConstant* tmp_1=dynamic_cast<DataConstant*>(arg_1_Z.borrowData());
2346 DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());
2347
2348 int sampleNo_0,dataPointNo_0;
2349 int numSamples_0 = arg_0_Z.getNumSamples();
2350 int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
2351 int offset_1 = tmp_1->getPointOffset(0,0);
2352 res.requireWrite();
2353 #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
2354 for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
2355 for (dataPointNo_0 = 0; dataPointNo_0 < numDataPointsPerSample_0; dataPointNo_0++) {
2356 int offset_0 = tmp_0->getPointOffset(sampleNo_0,dataPointNo_0);
2357 int offset_2 = tmp_2->getPointOffset(sampleNo_0,dataPointNo_0);
2358
2359 const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
2360 const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
2361 double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
2362
2363
2364 tensor_binary_operation(size0, ptr_0, ptr_1, ptr_2, operation);
2365 }
2366 }
2367
2368 }
2369 else if (arg_0_Z.isExpanded() && arg_1_Z.isTagged()) {
2370 // After finding a common function space above the two inputs have the same numSamples and num DPPS
2371 res = Data(0.0, shape0, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
2372 DataExpanded* tmp_0=dynamic_cast<DataExpanded*>(arg_0_Z.borrowData());
2373 DataTagged* tmp_1=dynamic_cast<DataTagged*>(arg_1_Z.borrowData());
2374 DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());
2375
2376 int sampleNo_0,dataPointNo_0;
2377 int numSamples_0 = arg_0_Z.getNumSamples();
2378 int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
2379 res.requireWrite();
2380 #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
2381 for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
2382 int offset_1 = tmp_1->getPointOffset(sampleNo_0,0);
2383 const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
2384 for (dataPointNo_0 = 0; dataPointNo_0 < numDataPointsPerSample_0; dataPointNo_0++) {
2385 int offset_0 = tmp_0->getPointOffset(sampleNo_0,dataPointNo_0);
2386 int offset_2 = tmp_2->getPointOffset(sampleNo_0,dataPointNo_0);
2387 const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
2388 double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
2389 tensor_binary_operation(size0, ptr_0, ptr_1, ptr_2, operation);
2390 }
2391 }
2392
2393 }
2394 else if (arg_0_Z.isExpanded() && arg_1_Z.isExpanded()) {
2395 // After finding a common function space above the two inputs have the same numSamples and num DPPS
2396 res = Data(0.0, shape0, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
2397 DataExpanded* tmp_0=dynamic_cast<DataExpanded*>(arg_0_Z.borrowData());
2398 DataExpanded* tmp_1=dynamic_cast<DataExpanded*>(arg_1_Z.borrowData());
2399 DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());
2400
2401 int sampleNo_0,dataPointNo_0;
2402 int numSamples_0 = arg_0_Z.getNumSamples();
2403 int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
2404 res.requireWrite();
2405 #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
2406 for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
2407 dataPointNo_0=0;
2408 // for (dataPointNo_0 = 0; dataPointNo_0 < numDataPointsPerSample_0; dataPointNo_0++) {
2409 int offset_0 = tmp_0->getPointOffset(sampleNo_0,dataPointNo_0);
2410 int offset_1 = tmp_1->getPointOffset(sampleNo_0,dataPointNo_0);
2411 int offset_2 = tmp_2->getPointOffset(sampleNo_0,dataPointNo_0);
2412 const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
2413 const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
2414 double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
2415 tensor_binary_operation(size0*numDataPointsPerSample_0, ptr_0, ptr_1, ptr_2, operation);
2416 // }
2417 }
2418
2419 }
2420 else {
2421 throw DataException("Error - C_TensorBinaryOperation: unknown combination of inputs");
2422 }
2423
    if (arg_0_Z.isConstant() && arg_1_Z.isConstant()) {
      res = Data(0.0, shape1, arg_1_Z.getFunctionSpace());      // DataConstant output
      const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(0));
      const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(0));
      double *ptr_2 = &(res.getDataAtOffsetRW(0));
      tensor_binary_operation(size1, ptr_0[0], ptr_1, ptr_2, operation);
    }
    else if (arg_0_Z.isConstant() && arg_1_Z.isTagged()) {

      // Prepare the DataConstant input
      DataConstant* tmp_0=dynamic_cast<DataConstant*>(arg_0_Z.borrowData());

      // Borrow DataTagged input from Data object
      DataTagged* tmp_1=dynamic_cast<DataTagged*>(arg_1_Z.borrowData());

      // Prepare a DataTagged output 2
      res = Data(0.0, shape1, arg_1_Z.getFunctionSpace());      // DataTagged output
      res.tag();
      DataTagged* tmp_2=dynamic_cast<DataTagged*>(res.borrowData());

      // Prepare offset into DataConstant
      int offset_0 = tmp_0->getPointOffset(0,0);
      const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));

      const double *ptr_1 = &(tmp_1->getDefaultValueRO(0));
      double *ptr_2 = &(tmp_2->getDefaultValueRW(0));

      // Compute a result for the default
      tensor_binary_operation(size1, ptr_0[0], ptr_1, ptr_2, operation);
      // Compute a result for each tag
      const DataTagged::DataMapType& lookup_1=tmp_1->getTagLookup();
      DataTagged::DataMapType::const_iterator i; // i->first is a tag, i->second is an offset into memory
      for (i=lookup_1.begin();i!=lookup_1.end();i++) {
        tmp_2->addTag(i->first);
        const double *ptr_1 = &(tmp_1->getDataByTagRO(i->first,0));
        double *ptr_2 = &(tmp_2->getDataByTagRW(i->first,0));
        tensor_binary_operation(size1, ptr_0[0], ptr_1, ptr_2, operation);
      }

    }
    else if (arg_0_Z.isConstant() && arg_1_Z.isExpanded()) {

      res = Data(0.0, shape1, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
      DataConstant* tmp_0=dynamic_cast<DataConstant*>(arg_0_Z.borrowData());
      DataExpanded* tmp_1=dynamic_cast<DataExpanded*>(arg_1_Z.borrowData());
      DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());

      int sampleNo_1,dataPointNo_1;
      int numSamples_1 = arg_1_Z.getNumSamples();
      int numDataPointsPerSample_1 = arg_1_Z.getNumDataPointsPerSample();
      int offset_0 = tmp_0->getPointOffset(0,0);
      const double *ptr_src = &(arg_0_Z.getDataAtOffsetRO(offset_0));
      double ptr_0 = ptr_src[0];   // arg_0 is a rank-0 constant, so a single scalar
      int size = size1*numDataPointsPerSample_1;
      res.requireWrite();
      #pragma omp parallel for private(sampleNo_1,dataPointNo_1) schedule(static)
      for (sampleNo_1 = 0; sampleNo_1 < numSamples_1; sampleNo_1++) {
        // process the whole sample at once rather than per data point:
        // for (dataPointNo_1 = 0; dataPointNo_1 < numDataPointsPerSample_1; dataPointNo_1++) {
        int offset_1 = tmp_1->getPointOffset(sampleNo_1,0);
        int offset_2 = tmp_2->getPointOffset(sampleNo_1,0);
        // const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
        const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
        double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
        tensor_binary_operation(size, ptr_0, ptr_1, ptr_2, operation);
        // }
      }

    }
    else if (arg_0_Z.isTagged() && arg_1_Z.isConstant()) {

      // Borrow DataTagged input from Data object
      DataTagged* tmp_0=dynamic_cast<DataTagged*>(arg_0_Z.borrowData());

      // Prepare the DataConstant input
      DataConstant* tmp_1=dynamic_cast<DataConstant*>(arg_1_Z.borrowData());

      // Prepare a DataTagged output 2
      res = Data(0.0, shape1, arg_0_Z.getFunctionSpace());      // DataTagged output
      res.tag();
      DataTagged* tmp_2=dynamic_cast<DataTagged*>(res.borrowData());

      // Prepare offset into DataConstant
      int offset_1 = tmp_1->getPointOffset(0,0);
      const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));

      // Get the pointers to the actual data
      const double *ptr_0 = &(tmp_0->getDefaultValueRO(0));
      double *ptr_2 = &(tmp_2->getDefaultValueRW(0));

      // Compute a result for the default
      tensor_binary_operation(size1, ptr_0[0], ptr_1, ptr_2, operation);
      // Compute a result for each tag
      const DataTagged::DataMapType& lookup_0=tmp_0->getTagLookup();
      DataTagged::DataMapType::const_iterator i; // i->first is a tag, i->second is an offset into memory
      for (i=lookup_0.begin();i!=lookup_0.end();i++) {
        tmp_2->addTag(i->first);
        const double *ptr_0 = &(tmp_0->getDataByTagRO(i->first,0));
        double *ptr_2 = &(tmp_2->getDataByTagRW(i->first,0));

        tensor_binary_operation(size1, ptr_0[0], ptr_1, ptr_2, operation);
      }

    }
    else if (arg_0_Z.isTagged() && arg_1_Z.isTagged()) {

      // Borrow DataTagged input from Data object
      DataTagged* tmp_0=dynamic_cast<DataTagged*>(arg_0_Z.borrowData());

      // Borrow DataTagged input from Data object
      DataTagged* tmp_1=dynamic_cast<DataTagged*>(arg_1_Z.borrowData());

      // Prepare a DataTagged output 2
      res = Data(0.0, shape1, arg_1_Z.getFunctionSpace());
      res.tag();        // DataTagged output
      DataTagged* tmp_2=dynamic_cast<DataTagged*>(res.borrowData());

      // Get the pointers to the actual data
      const double *ptr_0 = &(tmp_0->getDefaultValueRO(0));
      const double *ptr_1 = &(tmp_1->getDefaultValueRO(0));
      double *ptr_2 = &(tmp_2->getDefaultValueRW(0));

      // Compute a result for the default
      tensor_binary_operation(size1, ptr_0[0], ptr_1, ptr_2, operation);
      // Merge the tags
      DataTagged::DataMapType::const_iterator i; // i->first is a tag, i->second is an offset into memory
      const DataTagged::DataMapType& lookup_0=tmp_0->getTagLookup();
      const DataTagged::DataMapType& lookup_1=tmp_1->getTagLookup();
      for (i=lookup_0.begin();i!=lookup_0.end();i++) {
        tmp_2->addTag(i->first); // use tmp_2 to get correct shape
      }
      for (i=lookup_1.begin();i!=lookup_1.end();i++) {
        tmp_2->addTag(i->first);
      }
      // Compute a result for each tag
      const DataTagged::DataMapType& lookup_2=tmp_2->getTagLookup();
      for (i=lookup_2.begin();i!=lookup_2.end();i++) {
        const double *ptr_0 = &(tmp_0->getDataByTagRO(i->first,0));
        const double *ptr_1 = &(tmp_1->getDataByTagRO(i->first,0));
        double *ptr_2 = &(tmp_2->getDataByTagRW(i->first,0));

        tensor_binary_operation(size1, ptr_0[0], ptr_1, ptr_2, operation);
      }

    }
    else if (arg_0_Z.isTagged() && arg_1_Z.isExpanded()) {

      // After finding a common function space above the two inputs have the same numSamples and num DPPS
      res = Data(0.0, shape1, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
      DataTagged*   tmp_0=dynamic_cast<DataTagged*>(arg_0_Z.borrowData());
      DataExpanded* tmp_1=dynamic_cast<DataExpanded*>(arg_1_Z.borrowData());
      DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());

      int sampleNo_0,dataPointNo_0;
      int numSamples_0 = arg_0_Z.getNumSamples();
      int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
      res.requireWrite();
      #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
      for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
        int offset_0 = tmp_0->getPointOffset(sampleNo_0,0); // They're all the same, so just use #0
        const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
        for (dataPointNo_0 = 0; dataPointNo_0 < numDataPointsPerSample_0; dataPointNo_0++) {
          int offset_1 = tmp_1->getPointOffset(sampleNo_0,dataPointNo_0);
          int offset_2 = tmp_2->getPointOffset(sampleNo_0,dataPointNo_0);
          const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
          double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
          tensor_binary_operation(size1, ptr_0[0], ptr_1, ptr_2, operation);
        }
      }

    }
    else if (arg_0_Z.isExpanded() && arg_1_Z.isConstant()) {
      res = Data(0.0, shape1, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
      DataExpanded* tmp_0=dynamic_cast<DataExpanded*>(arg_0_Z.borrowData());
      DataConstant* tmp_1=dynamic_cast<DataConstant*>(arg_1_Z.borrowData());
      DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());

      int sampleNo_0,dataPointNo_0;
      int numSamples_0 = arg_0_Z.getNumSamples();
      int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
      int offset_1 = tmp_1->getPointOffset(0,0);
      res.requireWrite();
      #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
      for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
        for (dataPointNo_0 = 0; dataPointNo_0 < numDataPointsPerSample_0; dataPointNo_0++) {
          int offset_0 = tmp_0->getPointOffset(sampleNo_0,dataPointNo_0);
          int offset_2 = tmp_2->getPointOffset(sampleNo_0,dataPointNo_0);
          const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
          const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
          double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
          tensor_binary_operation(size1, ptr_0[0], ptr_1, ptr_2, operation);
        }
      }

    }
    else if (arg_0_Z.isExpanded() && arg_1_Z.isTagged()) {

      // After finding a common function space above the two inputs have the same numSamples and num DPPS
      res = Data(0.0, shape1, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
      DataExpanded* tmp_0=dynamic_cast<DataExpanded*>(arg_0_Z.borrowData());
      DataTagged*   tmp_1=dynamic_cast<DataTagged*>(arg_1_Z.borrowData());
      DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());

      int sampleNo_0,dataPointNo_0;
      int numSamples_0 = arg_0_Z.getNumSamples();
      int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
      res.requireWrite();
      #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
      for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
        int offset_1 = tmp_1->getPointOffset(sampleNo_0,0);
        const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
        for (dataPointNo_0 = 0; dataPointNo_0 < numDataPointsPerSample_0; dataPointNo_0++) {
          int offset_0 = tmp_0->getPointOffset(sampleNo_0,dataPointNo_0);
          int offset_2 = tmp_2->getPointOffset(sampleNo_0,dataPointNo_0);
          const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
          double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
          tensor_binary_operation(size1, ptr_0[0], ptr_1, ptr_2, operation);
        }
      }

    }
    else if (arg_0_Z.isExpanded() && arg_1_Z.isExpanded()) {

      // After finding a common function space above the two inputs have the same numSamples and num DPPS
      res = Data(0.0, shape1, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
      DataExpanded* tmp_0=dynamic_cast<DataExpanded*>(arg_0_Z.borrowData());
      DataExpanded* tmp_1=dynamic_cast<DataExpanded*>(arg_1_Z.borrowData());
      DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());

      int sampleNo_0,dataPointNo_0;
      int numSamples_0 = arg_0_Z.getNumSamples();
      int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
      res.requireWrite();
      #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
      for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
        for (dataPointNo_0 = 0; dataPointNo_0 < numDataPointsPerSample_0; dataPointNo_0++) {
          int offset_0 = tmp_0->getPointOffset(sampleNo_0,dataPointNo_0);
          int offset_1 = tmp_1->getPointOffset(sampleNo_0,dataPointNo_0);
          int offset_2 = tmp_2->getPointOffset(sampleNo_0,dataPointNo_0);
          const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
          const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
          double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
          tensor_binary_operation(size1, ptr_0[0], ptr_1, ptr_2, operation);
        }
      }

    }
    else {
      throw DataException("Error - C_TensorBinaryOperation: unknown combination of inputs");
    }
  } else if (0 == rank1) {
    if (arg_0_Z.isConstant() && arg_1_Z.isConstant()) {
      res = Data(0.0, shape0, arg_1_Z.getFunctionSpace());      // DataConstant output
      const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(0));
      const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(0));
      double *ptr_2 = &(res.getDataAtOffsetRW(0));
      tensor_binary_operation(size0, ptr_0, ptr_1[0], ptr_2, operation);
    }
    else if (arg_0_Z.isConstant() && arg_1_Z.isTagged()) {

      // Prepare the DataConstant input
      DataConstant* tmp_0=dynamic_cast<DataConstant*>(arg_0_Z.borrowData());

      // Borrow DataTagged input from Data object
      DataTagged* tmp_1=dynamic_cast<DataTagged*>(arg_1_Z.borrowData());

      // Prepare a DataTagged output 2
      res = Data(0.0, shape0, arg_1_Z.getFunctionSpace());      // DataTagged output
      res.tag();
      DataTagged* tmp_2=dynamic_cast<DataTagged*>(res.borrowData());

      // Prepare offset into DataConstant
      int offset_0 = tmp_0->getPointOffset(0,0);
      const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));

      // Get the pointers to the actual data
      const double *ptr_1 = &(tmp_1->getDefaultValueRO(0));
      double *ptr_2 = &(tmp_2->getDefaultValueRW(0));

      // Compute a result for the default
      tensor_binary_operation(size0, ptr_0, ptr_1[0], ptr_2, operation);
      // Compute a result for each tag
      const DataTagged::DataMapType& lookup_1=tmp_1->getTagLookup();
      DataTagged::DataMapType::const_iterator i; // i->first is a tag, i->second is an offset into memory
      for (i=lookup_1.begin();i!=lookup_1.end();i++) {
        tmp_2->addTag(i->first);
        const double *ptr_1 = &(tmp_1->getDataByTagRO(i->first,0));
        double *ptr_2 = &(tmp_2->getDataByTagRW(i->first,0));
        tensor_binary_operation(size0, ptr_0, ptr_1[0], ptr_2, operation);
      }
    }
    else if (arg_0_Z.isConstant() && arg_1_Z.isExpanded()) {

      res = Data(0.0, shape0, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
      DataConstant* tmp_0=dynamic_cast<DataConstant*>(arg_0_Z.borrowData());
      DataExpanded* tmp_1=dynamic_cast<DataExpanded*>(arg_1_Z.borrowData());
      DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());

      int sampleNo_1,dataPointNo_1;
      int numSamples_1 = arg_1_Z.getNumSamples();
      int numDataPointsPerSample_1 = arg_1_Z.getNumDataPointsPerSample();
      int offset_0 = tmp_0->getPointOffset(0,0);
      res.requireWrite();
      #pragma omp parallel for private(sampleNo_1,dataPointNo_1) schedule(static)
      for (sampleNo_1 = 0; sampleNo_1 < numSamples_1; sampleNo_1++) {
        for (dataPointNo_1 = 0; dataPointNo_1 < numDataPointsPerSample_1; dataPointNo_1++) {
          int offset_1 = tmp_1->getPointOffset(sampleNo_1,dataPointNo_1);
          int offset_2 = tmp_2->getPointOffset(sampleNo_1,dataPointNo_1);
          const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
          const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
          double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
          tensor_binary_operation(size0, ptr_0, ptr_1[0], ptr_2, operation);
        }
      }

    }
    else if (arg_0_Z.isTagged() && arg_1_Z.isConstant()) {

      // Borrow DataTagged input from Data object
      DataTagged* tmp_0=dynamic_cast<DataTagged*>(arg_0_Z.borrowData());

      // Prepare the DataConstant input
      DataConstant* tmp_1=dynamic_cast<DataConstant*>(arg_1_Z.borrowData());

      // Prepare a DataTagged output 2
      res = Data(0.0, shape0, arg_0_Z.getFunctionSpace());      // DataTagged output
      res.tag();
      DataTagged* tmp_2=dynamic_cast<DataTagged*>(res.borrowData());

      // Prepare offset into DataConstant
      int offset_1 = tmp_1->getPointOffset(0,0);
      const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
      // Get the pointers to the actual data
      const double *ptr_0 = &(tmp_0->getDefaultValueRO(0));
      double *ptr_2 = &(tmp_2->getDefaultValueRW(0));
      // Compute a result for the default
      tensor_binary_operation(size0, ptr_0, ptr_1[0], ptr_2, operation);
      // Compute a result for each tag
      const DataTagged::DataMapType& lookup_0=tmp_0->getTagLookup();
      DataTagged::DataMapType::const_iterator i; // i->first is a tag, i->second is an offset into memory
      for (i=lookup_0.begin();i!=lookup_0.end();i++) {
        tmp_2->addTag(i->first);
        const double *ptr_0 = &(tmp_0->getDataByTagRO(i->first,0));
        double *ptr_2 = &(tmp_2->getDataByTagRW(i->first,0));
        tensor_binary_operation(size0, ptr_0, ptr_1[0], ptr_2, operation);
      }

    }
    else if (arg_0_Z.isTagged() && arg_1_Z.isTagged()) {

      // Borrow DataTagged input from Data object
      DataTagged* tmp_0=dynamic_cast<DataTagged*>(arg_0_Z.borrowData());

      // Borrow DataTagged input from Data object
      DataTagged* tmp_1=dynamic_cast<DataTagged*>(arg_1_Z.borrowData());

      // Prepare a DataTagged output 2
      res = Data(0.0, shape0, arg_1_Z.getFunctionSpace());
      res.tag();        // DataTagged output
      DataTagged* tmp_2=dynamic_cast<DataTagged*>(res.borrowData());

      // Get the pointers to the actual data
      const double *ptr_0 = &(tmp_0->getDefaultValueRO(0));
      const double *ptr_1 = &(tmp_1->getDefaultValueRO(0));
      double *ptr_2 = &(tmp_2->getDefaultValueRW(0));

      // Compute a result for the default
      tensor_binary_operation(size0, ptr_0, ptr_1[0], ptr_2, operation);
      // Merge the tags
      DataTagged::DataMapType::const_iterator i; // i->first is a tag, i->second is an offset into memory
      const DataTagged::DataMapType& lookup_0=tmp_0->getTagLookup();
      const DataTagged::DataMapType& lookup_1=tmp_1->getTagLookup();
      for (i=lookup_0.begin();i!=lookup_0.end();i++) {
        tmp_2->addTag(i->first); // use tmp_2 to get correct shape
      }
      for (i=lookup_1.begin();i!=lookup_1.end();i++) {
        tmp_2->addTag(i->first);
      }
      // Compute a result for each tag
      const DataTagged::DataMapType& lookup_2=tmp_2->getTagLookup();
      for (i=lookup_2.begin();i!=lookup_2.end();i++) {
        const double *ptr_0 = &(tmp_0->getDataByTagRO(i->first,0));
        const double *ptr_1 = &(tmp_1->getDataByTagRO(i->first,0));
        double *ptr_2 = &(tmp_2->getDataByTagRW(i->first,0));
        tensor_binary_operation(size0, ptr_0, ptr_1[0], ptr_2, operation);
      }

    }
    else if (arg_0_Z.isTagged() && arg_1_Z.isExpanded()) {

      // After finding a common function space above the two inputs have the same numSamples and num DPPS
      res = Data(0.0, shape0, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
      DataTagged*   tmp_0=dynamic_cast<DataTagged*>(arg_0_Z.borrowData());
      DataExpanded* tmp_1=dynamic_cast<DataExpanded*>(arg_1_Z.borrowData());
      DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());

      int sampleNo_0,dataPointNo_0;
      int numSamples_0 = arg_0_Z.getNumSamples();
      int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
      res.requireWrite();
      #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
      for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
        int offset_0 = tmp_0->getPointOffset(sampleNo_0,0); // They're all the same, so just use #0
        const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
        for (dataPointNo_0 = 0; dataPointNo_0 < numDataPointsPerSample_0; dataPointNo_0++) {
          int offset_1 = tmp_1->getPointOffset(sampleNo_0,dataPointNo_0);
          int offset_2 = tmp_2->getPointOffset(sampleNo_0,dataPointNo_0);
          const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
          double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
          tensor_binary_operation(size0, ptr_0, ptr_1[0], ptr_2, operation);
        }
      }

    }
    else if (arg_0_Z.isExpanded() && arg_1_Z.isConstant()) {
      res = Data(0.0, shape0, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
      DataExpanded* tmp_0=dynamic_cast<DataExpanded*>(arg_0_Z.borrowData());
      DataConstant* tmp_1=dynamic_cast<DataConstant*>(arg_1_Z.borrowData());
      DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());

      int sampleNo_0,dataPointNo_0;
      int numSamples_0 = arg_0_Z.getNumSamples();
      int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
      int offset_1 = tmp_1->getPointOffset(0,0);
      const double *ptr_src = &(arg_1_Z.getDataAtOffsetRO(offset_1));
      double ptr_1 = ptr_src[0];   // arg_1 is a rank-0 constant, so a single scalar
      int size = size0 * numDataPointsPerSample_0;
      res.requireWrite();
      #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
      for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
        // process the whole sample at once rather than per data point:
        // for (dataPointNo_0 = 0; dataPointNo_0 < numDataPointsPerSample_0; dataPointNo_0++) {
        int offset_0 = tmp_0->getPointOffset(sampleNo_0,0);
        int offset_2 = tmp_2->getPointOffset(sampleNo_0,0);
        const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
        // const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
        double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
        tensor_binary_operation(size, ptr_0, ptr_1, ptr_2, operation);
        // }
      }

    }
    else if (arg_0_Z.isExpanded() && arg_1_Z.isTagged()) {

      // After finding a common function space above the two inputs have the same numSamples and num DPPS
      res = Data(0.0, shape0, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
      DataExpanded* tmp_0=dynamic_cast<DataExpanded*>(arg_0_Z.borrowData());
      DataTagged*   tmp_1=dynamic_cast<DataTagged*>(arg_1_Z.borrowData());
      DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());

      int sampleNo_0,dataPointNo_0;
      int numSamples_0 = arg_0_Z.getNumSamples();
      int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
      res.requireWrite();
      #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
      for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
        int offset_1 = tmp_1->getPointOffset(sampleNo_0,0);
        const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
        for (dataPointNo_0 = 0; dataPointNo_0 < numDataPointsPerSample_0; dataPointNo_0++) {
          int offset_0 = tmp_0->getPointOffset(sampleNo_0,dataPointNo_0);
          int offset_2 = tmp_2->getPointOffset(sampleNo_0,dataPointNo_0);
          const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
          double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
          tensor_binary_operation(size0, ptr_0, ptr_1[0], ptr_2, operation);
        }
      }

    }
    else if (arg_0_Z.isExpanded() && arg_1_Z.isExpanded()) {

      // After finding a common function space above the two inputs have the same numSamples and num DPPS
      res = Data(0.0, shape0, arg_1_Z.getFunctionSpace(),true); // DataExpanded output
      DataExpanded* tmp_0=dynamic_cast<DataExpanded*>(arg_0_Z.borrowData());
      DataExpanded* tmp_1=dynamic_cast<DataExpanded*>(arg_1_Z.borrowData());
      DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());

      int sampleNo_0,dataPointNo_0;
      int numSamples_0 = arg_0_Z.getNumSamples();
      int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
      res.requireWrite();
      #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
      for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
        for (dataPointNo_0 = 0; dataPointNo_0 < numDataPointsPerSample_0; dataPointNo_0++) {
          int offset_0 = tmp_0->getPointOffset(sampleNo_0,dataPointNo_0);
          int offset_1 = tmp_1->getPointOffset(sampleNo_0,dataPointNo_0);
          int offset_2 = tmp_2->getPointOffset(sampleNo_0,dataPointNo_0);
          const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
          const double *ptr_1 = &(arg_1_Z.getDataAtOffsetRO(offset_1));
          double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
          tensor_binary_operation(size0, ptr_0, ptr_1[0], ptr_2, operation);
        }
      }

    }
    else {
      throw DataException("Error - C_TensorBinaryOperation: unknown combination of inputs");
    }

  } else {
    throw DataException("Error - C_TensorBinaryOperation: arguments have incompatible shapes");
  }

  return res;
}
template <typename UnaryFunction>
Data
C_TensorUnaryOperation(Data const &arg_0,
                       UnaryFunction operation)
{
  if (arg_0.isEmpty()) // do this before we attempt to interpolate
  {
    throw DataException("Error - Operations not permitted on instances of DataEmpty.");
  }
  if (arg_0.isLazy())
  {
    throw DataException("Error - Operations not permitted on lazy data.");
  }
  // Interpolate if necessary and find an appropriate function space
  Data arg_0_Z = Data(arg_0);

  // Get rank and shape of inputs
  const DataTypes::ShapeType& shape0 = arg_0_Z.getDataPointShape();
  int size0 = arg_0_Z.getDataPointSize();

  // Declare output Data object
  Data res;

  if (arg_0_Z.isConstant()) {
    res = Data(0.0, shape0, arg_0_Z.getFunctionSpace()); // DataConstant output
    const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(0));
    double *ptr_2 = &(res.getDataAtOffsetRW(0));
    tensor_unary_operation(size0, ptr_0, ptr_2, operation);
  }
  else if (arg_0_Z.isTagged()) {

    // Borrow DataTagged input from Data object
    DataTagged* tmp_0=dynamic_cast<DataTagged*>(arg_0_Z.borrowData());

    // Prepare a DataTagged output
    res = Data(0.0, shape0, arg_0_Z.getFunctionSpace()); // DataTagged output
    res.tag();
    DataTagged* tmp_2=dynamic_cast<DataTagged*>(res.borrowData());

    // Get the pointers to the actual data
    const double *ptr_0 = &(tmp_0->getDefaultValueRO(0));
    double *ptr_2 = &(tmp_2->getDefaultValueRW(0));
    // Compute a result for the default
    tensor_unary_operation(size0, ptr_0, ptr_2, operation);
    // Compute a result for each tag
    const DataTagged::DataMapType& lookup_0=tmp_0->getTagLookup();
    DataTagged::DataMapType::const_iterator i; // i->first is a tag, i->second is an offset into memory
    for (i=lookup_0.begin();i!=lookup_0.end();i++) {
      tmp_2->addTag(i->first);
      const double *ptr_0 = &(tmp_0->getDataByTagRO(i->first,0));
      double *ptr_2 = &(tmp_2->getDataByTagRW(i->first,0));
      tensor_unary_operation(size0, ptr_0, ptr_2, operation);
    }

  }
  else if (arg_0_Z.isExpanded()) {

    res = Data(0.0, shape0, arg_0_Z.getFunctionSpace(),true); // DataExpanded output
    DataExpanded* tmp_0=dynamic_cast<DataExpanded*>(arg_0_Z.borrowData());
    DataExpanded* tmp_2=dynamic_cast<DataExpanded*>(res.borrowData());

    int sampleNo_0,dataPointNo_0;
    int numSamples_0 = arg_0_Z.getNumSamples();
    int numDataPointsPerSample_0 = arg_0_Z.getNumDataPointsPerSample();
    #pragma omp parallel for private(sampleNo_0,dataPointNo_0) schedule(static)
    for (sampleNo_0 = 0; sampleNo_0 < numSamples_0; sampleNo_0++) {
      // Process the whole sample in one call rather than looping over the
      // individual data points within it
      dataPointNo_0=0;
      int offset_0 = tmp_0->getPointOffset(sampleNo_0,dataPointNo_0);
      int offset_2 = tmp_2->getPointOffset(sampleNo_0,dataPointNo_0);
      const double *ptr_0 = &(arg_0_Z.getDataAtOffsetRO(offset_0));
      double *ptr_2 = &(res.getDataAtOffsetRW(offset_2));
      tensor_unary_operation(size0*numDataPointsPerSample_0, ptr_0, ptr_2, operation);
    }
  }
  else {
    throw DataException("Error - C_TensorUnaryOperation: unknown combination of inputs");
  }

  return res;
}

}
#endif
Binary 1110 to Octal Conversion
What is 1110 binary in octal? A converter, chart, and solved example problem with a step-by-step workout showing how to carry out binary 1110 to octal conversion manually. The base-2 value 1110₂ is equal to the base-8 value 16₈.
In different representations:
1110₂ = 16₈
0b1110 = 0o16
Binary   Octal   Decimal
1100.1   14.4    12.5
1101     15      13
1101.1   15.4    13.5
1110     16      14
1110.1   16.4    14.5
1111     17      15
Work to Find What is 1110 Binary in Octal
Below is the example problem with step-by-step work to find what 1110 binary is in octal.
1110 Binary to Octal Conversion:
step 1 Split the given binary number 1110₂ into groups of three bits, from right to left:
001 110
step 2 Find the octal equivalent for each group and write it down in the same order:
001 → 1
110 → 6
1110₂ = 16₈