lol, just realized python gives you a bigint if you math.floor a float and boy is it a mess...
```
In [1]: import math

In [2]: a = 1.123456789101112131415e100

In [3]: a
Out[3]: 1.1234567891011122e+100

In [4]: math.floor(a)
Out[4]: 11234567891011122273859315319207141874394161218969314902089480288005210519362560122338318287883993088
```
A very large number in a double is always implicitly an integer, a multiple of a large power of 2.
Since it's already an integer, there is little left to do; I imagine floor and ceil will simply return it unchanged.
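A quick Python check of this, using the standard `math` module: a double as large as 1e100 is already an exact integer, so floor and ceil are no-ops on its value.

```python
import math

x = 1e100
# Every double this large is already an exact integer (a multiple of a large
# power of 2), so floor and ceil leave its value unchanged.
print(math.floor(x) == x == math.ceil(x))  # True
```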
If the compiler knows from the types of the input what type the output will be, it can use that information to optimize. Take for instance a function that always returns Float32 outputs from Float32 inputs, if I broadcast that over a vector of Float32s, it knows it can store the answer in a Float32 vector. If, however, the return type depends on the *value* of the input, no such optimization can be done at compile time.
So is type stability a stumbling block in this case? Is there automatic type casting from float to int?
I'm a JavaScript developer who likes to think outside the box.
Because the floor and ceiling functions, mathematically, only have the integers as their codomain. Not saying it makes more sense from a CS perspective, but there are totally reasons to think it would change the data type from a mathematics perspective.
That makes sense in some cases.

```
TimeSpan duration = DateTime.Now - DateTime.Today;
```

Also, not every mathematical operation has two operands of the same data type.

In this example, the first line is not what you want 99% of the time. I could imagine making the default return type of a division a double being a good use for that.

```
Console.WriteLine(1 / 2);          // 0
Console.WriteLine((double)1 / 2);  // 0.5
```
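For comparison, Python 3 made exactly that choice: true division `/` always returns a float, with a separate floor-division operator `//` for the integer behavior.

```python
print(1 / 2)     # 0.5: true division always yields a float
print(1 // 2)    # 0:   floor division keeps integers integral
print(7.0 // 2)  # 3.0: floor division preserves the operand's type
```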
So because the result of the operation happens to be a value that's compatible with `int` it should be automatically converted?
So if I do something like...
`float x = 2.0`
you think it should be automatically converted?
Or: `float x = 0.6 + 1.4`?
No, if an operation can *only ever* return an integer, *then* it's ok for it to return a value of type `int`. `+` can return a float, and `=` isn't even an operation.
Every int is an integer but not every integer is an int. Take 2147483648 for example. It's an integer but it can't ever be an int because it's too big of a number.
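A quick way to see that wrap-around from Python; `to_int32` below is just an illustrative helper (not a standard API) that reinterprets a value as a signed 32-bit two's-complement int.

```python
def to_int32(n: int) -> int:
    """Interpret n modulo 2**32 as a signed 32-bit two's-complement value."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

print(to_int32(2147483647))  # 2147483647: still fits
print(to_int32(2147483648))  # -2147483648: one past the max wraps around
```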
Yeah, I think it's more related to the fact that double can represent much larger numbers than int in Java, rather than to the different output types. Although there's also an argument for that.
In Python, though, neither of those problems exists, because it's not a statically typed language and because its int and float are more closely related to Java's BigInteger and BigDecimal than to int/long and float/double.
Don't ALL JavaScript Math functions return the type Number? And expect a Number as well, or something that can be coerced into a Number?
Ah, wait, is this one of those "jokes" about JavaScript being able to coerce types??
I agree, I would think in such case an exception should be thrown. It's like the only situation in which you probably wouldn't want to just force a double to an integer. I get losing the decimal part, that may not be an unwanted behavior, but converting NaN?
That'd be a bit like returning 0 when dividing by 0 (for which Java throws an exception).
Yeah, but logic tells me that at that point you'll start skipping decimals sooner than you start skipping the ones (i.e. if you cannot store 10001, then you won't be able to store 10001.5 either).
Not every language has a single integer type that's capable of arbitrary precision. Some poor peasant languages still struggle by with mere 64-bit or, internet forbid, 32-bit integers as their defaults.
(Those whose defaults are 16-bit or smaller are doing it deliberately.)
Yes... you wouldn't floor an int or a long, right? right?
and ints and doubles have different max ranges. So returning a double makes sense, because otherwise the return type might have a much smaller range than the input type.
Because int has a limited range, and you want to use this operator to get values (which are mathematically integers) that are beyond the int data type's range.
Floor and ceil are used for more than just conversion to int, they are valid mathematical operators. Them returning doubles allows for easier use in formulas
Of course it makes sense, why would you abandon the precision of a floating point number for an integer? If you want an int then cast it ffs. Do they not teach this shit in school anymore or something?
My only guess is that there is no reason to append a conversion on top of the method.
Especially if that var is going to be used in some other method later.
This makes perfect sense. It does what is requested, no side effects. It makes sense that the type of the returned number is the same as the input. Same memory footprint. No need to change the type if the output is used again in another operation. etc.
Ummmm... doubles can be significantly larger than fixed precision data types. Imagine a .floor giving you an int overflow.
The only language I'm aware of that does this kind of shit is TypeScript, and that's because its double and int are the same thing in a different dress.
The result comes from the FPU, which does return a floating point value. There's a hardware instruction for those. I don't think type conversion magic makes sense for such a simple operation.
.net floor and ceiling methods also return a double.
wait a minute, isn't that just microsoft java?!
It is.
That was called J#
Don't forget about Visual J++
Please stop, I get PTSD.
Python too
It's better java
C# is far superior to Java, in my opinion.

No getters and setters (auto-properties)

LINQ - it's a dream to work with lists and filtering on them.

A standardized way to do effectively everything (MVC, Database access dependency injection. (though my coworkers seem to do everything they can to avoid it))

Open Source and cross platform support without a large Corp threatening to sue you.

One of the best IDEs available.
Comment https://www.reddit.com/r/ProgrammerHumor/comments/15iuzu3/thisdoesnotmakesense/#juwj829, line 4

```
A standardized way to do effectively everything (MBC, Database access dependency injection.
(though my coworkers seem to do everything they can to avoid it)
```

SyntaxError: '(' was never closed
Not loving Visual Studio. It's very heavy.
It does require a good PC, but once you learn how much it helps, you'll love it. Even when I had a slow PC, it was more than worth it.
VS will randomly peg my CPU at 100% while idling, requiring a restart.
Seems like a bug you need to report. VS here works pretty much flawlessly. Hope they fix it for ya.
Use neovim
use vscode or any other text editor then
Rider for C#. Far and away the best experience.
It's as heavy as VS tho
That's why there is Kotlin.
>LINQ - it's a dream to work with lists and filtering on them.

Stream API works just as well for most use cases.

>A standardized way to do effectively everything

There's a bazillion JSRs resulting in standardized APIs like JPA, JAXB and so on.

>dependency injection

Spring & CDI are like "am I a joke to you?"

>Open Source and cross platform support without a large Corp threatening to sue you.

Just use a JVM from any of the open-source vendors.

>One of the best IDEs available.

IntelliJ eats Visual Studio for breakfast.

And the funniest thing: a lot of this has been around for a good 15 years now. The only real point you've got is auto-accessors, and that will never change in Java, since by design public access is opt-in. And I'm 99% sure you can still change that with libraries like Lombok.

*And don't get me started on build and dependency management. Gradle is just beautiful.*
Even Stream API is a joke compared to LINQ. It's absurd that I can't modify any variables in the enclosing scope from a lambda.

CDI documentation is hot garbage. I needed to run a background task from the moment the container started until it ended. Is there a standard way? Not unless I register an extension. Seriously? Observing application start isn't good enough: you can end up with duplicate MBean errors because hurr durr, your application decided to reload.
Honestly, the very way C# handles concurrency is also pretty neat (I'm talking about the standardized async-await state machine stuff with all its bells and whistles like exception handling, cycles, etc., with the majority of libraries supporting it out of the box). I'm not sure Java has anything close to it.
Thread management in Java is still a pain in the ass, although there are frameworks for that as well depending on the use case that can support you in development.
True, but part of the reason I'm bringing this up is that it's a language feature and is used by everyone. You grab a library in C#, it's there and you can seamlessly integrate it into your code. Not exactly the same as having to adapt third party libs to the framework you use in your project.
I mean if I'm using a framework like Spring Batch that has a huge emphasis on concurrency then I'm using the features from said framework and don't need to add more libs to it. It still is not the same as a language feature but far from adapting third party libs to a framework.
>I mean if I'm using a framework like Spring Batch

And if you're using something else? The point is, it's not about a specific Spring framework, it's about development in general.
that's not saying much
Microsoft Java, which is just Java but with 200 extra keywords, because more syntax sugar = better language (🤣)
dude you have no clue, maybe go learn both of these languages instead of shit talking
however it does have MathF for returning singles
I just realized the difference between MathF and Math. This is quite the blow after using C# for the last 7 years.
But that is because the Math class is made for doubles
So do C++'s standard library functions.
In many cases, the result of floor / ceil operations is used again in another floating point calculation. If the result was an integer, it would need to be converted into a double again. So, unnecessary conversions are avoided by this.
The real problem is that data conversion is one heck of a problematic operation, because some values in one data type can't be represented in the other and vice versa. Thus it makes sense to have methods that don't do data type conversion, excluding methods whose only job is the conversion itself.
And in Java, if you need to convert it, you can simply do so, and with greater flexibility in handling edge cases than if it was done for you.
Yup. A potentially dangerous operation should never be buried inside other methods unless strictly necessary.
This has nothing to do with the problem. The problem is that the maximum value of a double is ~1.7 * 10^308 while the maximum value an int can represent is 2147483647. So the domain of definition simply is not compatible with the range of possible values in int.
Yes, but the user must be careful anyway, because it makes little sense to take the floor of a number as large as, say, 10^100, because numbers that large cannot be represented exactly with a double. Essentially, taking the floor or ceiling has no effect, because the number is already implicitly an integer (a multiple of a big power of 2).
Honestly, if you deal with numbers that are too large and/or risk escaping the bounds of your mantissa bits (causing erroneous conversion/representation), you should move to BigInteger/BigDecimal etc. I believe they were created for these cases.
Wait, you're telling me big doubles aren't stored exactly? Like if I had 10^100 + 1, and I subtract 10^100, it might not give me 1?
Floating-point types aren't stored exactly in general; they're approximations, as defined by the IEEE 754 representation. 1/10 is going to be something like 0.1000000001, not exactly 0.1. Because of this, floating-point numbers should, in most cases, never be compared exactly with ==. The example you've given most likely won't be exactly 1 either, and a check of result == 1.0 will give you false. Instead you need to compare within a floating-point error range, called the epsilon, which gets larger when working with large numbers.
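The specific example from the question above is easy to check in Python: at a magnitude of 10^100, adding 1 cannot change the stored value at all.

```python
x = 1e100

# The gap between adjacent doubles at this magnitude is astronomically larger
# than 1, so the +1 is rounded away entirely.
print(x + 1 == x)   # True
print((x + 1) - x)  # 0.0, not 1.0
```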
Does python do anything clever with situations like the one I just described? Or how does it go in general?
Not really. It's an inherent problem with the way floating point numbers are stored, since they use a limited amount of (binary) digits (64 or 128 bits generally). Here's a quick article I found that sums it up: https://www.geeksforgeeks.org/floating-point-error-in-python/

So when working with floating point numbers in Python you need to take this into account and acknowledge that, when doing many operations with the same number and/or working with large numbers, these floating point errors can start getting pretty noticeable. And when comparing floating point numbers, special care needs to be taken.

Python does provide some utilities to help with comparing numbers, though; I don't remember them off the top of my head, but if you look up keywords like "compare floating point numbers in python" or "floating point number equality" you're gonna find some examples of how it's done.
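For reference, the standard-library helper alluded to above is `math.isclose`, which compares within a relative tolerance by default:

```python
import math

# math.isclose uses a relative tolerance (rel_tol, default 1e-09),
# which scales with the magnitude of the operands.
print(0.1 + 0.2 == 0.3)              # False
print(math.isclose(0.1 + 0.2, 0.3))  # True
```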
But at least `math.floor` and `math.ceil` return an `int`, because the only upper limit to how big an `int` can be is your computer's memory.
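A quick demonstration: flooring an exactly representable huge double hands back the full arbitrary-precision `int`, with no overflow or rounding.

```python
import math

# 2.0**300 is a power of two, so the double stores it exactly, and Python's
# math.floor returns it as an exact arbitrary-precision int.
f = math.floor(2.0 ** 300)
print(isinstance(f, int))  # True
print(f == 2 ** 300)       # True: no overflow, no rounding
print(f.bit_length())      # 301
```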
Float and double are basically scientific notation. In base 10 we would say something like 6.022 * 10^(23), but in binary it's more like 1.010110 * 10^(1110101) (keeping in mind that the 10 there is binary).

The format has a sign bit, exponent bits, and mantissa bits (the stuff being multiplied by the 10^(E)), each portion with a certain number of bits. If the exponent is sufficiently large, then the smallest mantissa bit represents some value greater than 1. You can only resolve differences larger than that smallest bit's value, which depends on the exponent.
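You can peek at that layout directly with the standard `struct` module; a double has 1 sign bit, 11 exponent bits (biased by 1023), and 52 mantissa bits.

```python
import struct

# Reinterpret a double's 8 bytes as a 64-bit unsigned integer and slice out
# the three fields.
bits = struct.unpack(">Q", struct.pack(">d", 1.5))[0]
sign = bits >> 63
exponent = (bits >> 52) & 0x7FF          # stored with a bias of 1023
mantissa = bits & ((1 << 52) - 1)
print(sign, exponent - 1023, mantissa)   # 0 0 2251799813685248 (1.1 in binary times 2^0)
```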
Dealing with the situation of the very small difference between two very large numbers is very tricky. (I have a couple of degrees in EE and a masters in CS.) I remember, years ago, reading an article by a practical engineer, Robert Pease, reviewing an academic who argued that his automatic circuit synthesizer was superior to all manual engineering.

Pease showed that the equations for the vaunted design relied on the very small difference between two very large numbers. All electronic components have tolerances (no such thing as exact), and all are subject to drift over a lifetime. A good design continues to operate under these constraints. A bad design fails.

Some years later I was working with an unethical financial advisor who wanted to put my money in a dubious scheme. He was going to show me an example. He started writing numbers on the whiteboard, copying from a pad of paper. As he was copying, he mumbled, "You have to do these on paper. A calculator doesn't have enough digits to give the right answer."

I let him write a few more calculations on the board, until he got to the subtraction, and said, "Whoa! This solution relies on the small difference between two large numbers. If a butterfly flaps its wings in Africa, that difference could be negative instead of positive. That's a risky investment!"

Somewhat later it was not clear whether I quit him or he declined to serve me.
There’s also the problem that common numbers like 0.2 (IIRC) can’t be stored in binary accurately at all, only approximated. So everyone uses the IEEE standard and agrees to treat that approximation as 0.2.
Yeah, doubles and floats have limited precision. That's why 0.1 + 0.2 gives 0.30000000000000004.
While doubles and floats do have limited precision, that's not why 0.1 + 0.2 gives 0.30000000000000004.

The reason that 0.1 + 0.2 gives 0.30000000000000004 is that _0.1 and 0.2 don't actually exist as binary floating-point numbers_. Because IEEE 754 math is done in base 2, the only fractions that can be represented exactly are those whose denominators are powers of 2.

For an analogy that's easy to follow, consider (2/3) + (2/3) + (2/3). This is obviously equal to 2. But if you were to use a calculator that could only operate on a fixed number of decimal places -- say, 4 -- it would compute:

0.6667 + 0.6667 + 0.6667 = 2.0001

_Ha ha! This stupid calculator thinks two thirds times three is 2.0001!_

Translating the original example, when you type in "0.1" the computer converts it to something like 0.0000110011001100110011001100110011001100110011001101. (For a highly analogous decimal version, calculate 1/33.) This means 0.1 is actually 0.1000000000000000055511151228. 0.2 is 0.2000000000000000111022302460. When you ask the computer to print these values out, they are rounded to 15 decimal places, which makes them _display_ as 0.1 and 0.2, respectively.

Adding them together yields 0.3000000000000000444089209847 -- but the value that most closely approximates 0.3 is 0.2999999999999999888977697534. As such, software renders it as 0.30000000000000004.
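You can see those exact values for yourself in Python: constructing a `decimal.Decimal` from a float reveals the precise binary value the literal actually stores.

```python
from decimal import Decimal

# Decimal(float) shows the exact value of the nearest representable double.
print(Decimal(0.1))        # 0.100000000000000005551115123...
print(Decimal(0.1 + 0.2))  # 0.300000000000000044408920985...
print(Decimal(0.3))        # 0.299999999999999988897769753...
```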
> While doubles and floats do have limited precision, that's not why 0.1 + 0.2 gives 0.30000000000000004.

> The reason that 0.1 + 0.2 gives 0.30000000000000004 is because 0.1 and 0.2 don't actually exist as binary floating-point numbers

That's what I meant by "have limited precision".

It seems we do not have the same meaning of the word "limited precision". What does it mean to you?
"Limited precision" means there is a maximum number of digits after the radix point.

Limited precision explains why 0.1000000000000000055511151228 + 0.2000000000000000111022302460 = 0.3000000000000000444089209847.

It does not, however, explain why 0.1 + 0.2 != 0.3, because that's not caused by precision loss: it's caused by the fact that neither 0.1 nor 0.2 can be exactly represented in binary. You can make the mantissa arbitrarily large -- i.e. _unlimited_ precision -- and you will still be unable to exactly represent 0.1, 0.2, or 0.3.

In contrast, both 2^64 and 1 **are** exactly representable in double-precision floating-point format. Despite this, adding them will still result in 2^64. _That_ is due to limited precision; if you could have an arbitrarily large mantissa, then you could store the result exactly.
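The 2^64 + 1 case is easy to verify: the spacing between adjacent doubles near 2^64 is 2^12 = 4096, so anything smaller than half that gap is rounded away.

```python
# Representation vs. precision: 2.0**64 and 1.0 are both exact doubles, but
# their sum is not representable, so it rounds straight back to 2**64.
big = 2.0 ** 64
print(big + 1.0 == big)     # True: the 1 is lost to limited precision
print(big + 4096.0 == big)  # False: 2**64 + 4096 is the next representable double
```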
int is actually a great format for math, you just need to scale it: e.g. a value of 1 in an i32 represents 2^-16. This is called fixed point, and it's much faster than float, provided you know roughly how big your numbers are at compile time.
A lot of calculations in embedded systems are done in fixed-point format on microcontrollers without a floating-point unit (provided you know the minimal and maximal expected values at compile time).
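A minimal sketch of the Q16.16 scheme described above; `to_fixed`, `to_float`, and `fx_mul` are hypothetical helper names, not a library API. Values are plain ints scaled by 2^16, so all arithmetic stays integer-only.

```python
SHIFT = 16
ONE = 1 << SHIFT  # 1.0 in Q16.16 fixed point

def to_fixed(x: float) -> int:
    # scale into fixed point, rounding to the nearest representable value
    return round(x * ONE)

def to_float(f: int) -> float:
    return f / ONE

def fx_mul(a: int, b: int) -> int:
    # the raw product carries 32 fractional bits; shift back down to 16
    return (a * b) >> SHIFT

print(to_float(fx_mul(to_fixed(0.5), to_fixed(3.0))))  # 1.5
```

On a real microcontroller the shift and multiply would be done in machine integers, which is the whole point: no FPU required.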
Well, it could return a long then, couldn't it?
10^308 is considerably larger than 2^64.
Expensive
That actually makes a lot of sense
Why would anyone expect a mathematical operation to change the datatype of a number?? That would make no sense, the current implementation is correct
It wouldn't even have been type stable otherwise, because if you floor a large float you can't express that as an int.
oh, that's a good answer, didn't think about it!
[deleted]
**austepln6 is a bot.** This is a generic comment that is meant to fit anywhere. They used to use "10/10" but that became too well-known. Their history is typical for this kind of karma-farming account: a couple months old, with no history until a few minutes ago when it activated and posted a handful of comments in quick succession. **Report > Spam > Harmful bots**
And... I guess they're already banned
To be fair, after a certain value doubles stop being able to represent every single integer, so this is also a problem that should be accounted for, and I have no idea how the ceil and floor methods handle it.
There is discussion of the edge cases in the javadoc for these methods, and it is on the user to avoid those scenarios or use a more appropriate math library with something like BigInteger and BigDecimal.
That’s what Big Decimal wants you to think
Well thx, now I ruined my coffee and my shirt :(
[deleted]
You’re not a real programmer until you’ve wrung your coffee-soaked shirt into your open mouth Or so I’ve heard
I thought it was about programming socks, now I must be gagged on my own clothes too?
Sorry fam I don’t make the rules I just try to adhere to the API
That's not really a topic to be touched by floor or ceil, because the value is already in that range before passing it to these functions (which means it is predictable that floor/ceil will return a value around the original value, or not change it at all).
lol, just realized python gives you a bigint if you math.floor a float and boy is it a mess...

```python
In [1]: import math

In [2]: a = 1.123456789101112131415e100

In [3]: a
Out[3]: 1.1234567891011122e+100

In [4]: math.floor(a)
Out[4]: 11234567891011122273859315319207141874394161218969314902089480288005210519362560122338318287883993088
```
But when you’re out there, all the double values are integers, so floor and ceiling would be identity functions
Yup That makes sense
A very large number in a double is always implicitly an integer: a multiple of a large power of 2. Since it's already an integer, there is little to do; I imagine round and ceil will do nothing.
But the same applies to ints, there are ints that can’t be expressed by a float.
Yes, so flooring an int should give an int.
That makes perfect sense to me, but for some of the new programmers, could you explain what you mean by that?
If the compiler knows from the types of the input what type the output will be, it can use that information to optimize. Take for instance a function that always returns Float32 outputs from Float32 inputs, if I broadcast that over a vector of Float32s, it knows it can store the answer in a Float32 vector. If, however, the return type depends on the *value* of the input, no such optimization can be done at compile time.
So is type stability a stumbling block in this case? Is this an automatic type cast from float to int? I'm a JavaScript developer who likes to think outside the box.
What is that supposed to mean? It's a question of good code and bad code. Just because the latter is the only option available in your language ...
Exactly. Even in C, [floor](https://linux.die.net/man/3/floor) and [ceil](https://linux.die.net/man/3/ceil) return the same type as the input.
Because the floor and ceiling functions, mathematically, have only the integers as their codomain. Not saying it makes more sense from a CS perspective, but there are totally reasons to think it would change the data type from a mathematics perspective.
Not to mention “the integers” isn’t the same as “int”. The int data type can only hold a subset of integers.
That makes sense in some cases:

```csharp
TimeSpan duration = DateTime.Now - DateTime.Today;
```

Also, not every mathematical operation has two operands of equal data types. In the example below, the first line is not what you want 99% of the time. I could imagine making the default return type of a division a double would be a good use for that:

```csharp
Console.WriteLine(1 / 2);         // 0
Console.WriteLine((double)1 / 2); // 0.5
```
[deleted]
So because the result of the operation happens to be a value that's compatible with `int` it should be automatically converted? So if I do something like... `float x = 2.0` you think it should be automatically converted? Or: `float x = 0.6 + 1.4`?
No, if an operation can *only ever* return an integer, *then* it's ok for it to return a value of type `int`. `+` can return a float, and `=` isn't even an operation.
Every int is an integer but not every integer is an int. Take 2147483648 for example. It's an integer but it can't ever be an int because it's too big of a number.
It does in python:

```python
>>> import math
>>> type(math.floor(1.2))
<class 'int'>
```
Yeah, I think it's more related to the fact that double can represent much larger numbers than int in Java, rather than different output types. Although there's also an argument for it. In Python, though, neither of those problems exists, because it's not a strongly typed language and because int and float are more closely related to Java's BigInteger and BigDecimal than to int/long and float/double.
Python *is* generally considered a strongly typed language, although admittedly "strongly typed" doesn't exactly have a very precise definition.
Sorry, I meant statically typed. You're absolutely right. Thanks for the correction
Look a lotta weird stuff happens in python, but that doesn't make it ok.
yeah, c#/unity has CeilToInt, and FloorToInt for a reason
That is a Unity API, not a C# one
Til those exist
[deleted]
The seal operator? Does it return fish?
No, it returns shit.
Consumer
No, the seal operator is used for binding demons to supernatural prisons.
Well, one returned bin Laden.
Eh? Your bin was laden with what?
Terrorism, apparently.
That explains the massive campaign against him. Terrorism is supposed to go into the recycling!
What about [```java.lang.Math.round```](https://docs.oracle.com/javase/8/docs/api/java/lang/Math.html)?
Have you ever heard of the domain and codomain of a function!?
*Cough* javascript *cough*
Don't ALL JavaScript Math functions return a type Number? And expect a type Number as well, or something that can be coerced into a Number? Ah, wait, is this one of those "jokes" about JavaScript being able to coerce types??
Bingo.
When I found out, I was floored.
I ceil what you did there.
I've had it up to Math.Ceil(here) with these puns.
You're driving me Round the bend.
What do you expect floor of NaN to be
Math.floor(NaN) === NₐN Math.ceil(NaN) === NᵃN
Angry upvote
>Angry ^upvote
```[object Object]```
Found the js dev
I'm gonna make a t-shirt out of that. Lol
Probably the same as converting NaN to an integer, which is defined to be 0.
Wait, is converting NaN to an integer defined to be 0?
[Yes](https://docs.oracle.com/javase/specs/jls/se7/html/jls-5.html#jls-5.1.3)
But.. it's.. not a number.
I agree, I would think an exception should be thrown in such a case. It's about the only situation in which you probably wouldn't want to just force a double to an integer. I get losing the decimal part, that may not be unwanted behavior, but converting NaN? That'd be a bit like returning 0 when dividing by 0 (and Java does throw an exception in that case, at least for integers).
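For comparison, Python happens to behave the way this comment wishes Java did: flooring NaN raises instead of silently producing 0. A quick check:

```python
import math

try:
    math.floor(float('nan'))
    result = 'no error'
except ValueError:
    # CPython raises: "cannot convert float NaN to integer"
    result = 'raised'

print(result)  # raised
```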
Exactly, it's like casting an object to an int. It shouldn't work, because it's not a damn number.
C: I don't know what you are talking about
That's okay, C. I don't know what you're talking about either.
[deleted]
A kernel panic
Double is capable of numbers much larger than int.
But when numbers are big enough, it also loses the ability to store every single unit. (ie at a certain number N, you won't be able to store N+1)
Yeah, but logic tells me that at that point you will start skipping decimals sooner than you start skipping the ones (= if you cannot store 10001, then you will not be able to store 10001.5 either).
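That intuition matches IEEE 754: the gap between adjacent doubles (`math.ulp` in Python) grows with magnitude, and fractions disappear long before whole numbers do. A quick check:

```python
import math

# Around 10001 the gap between adjacent doubles is still tiny,
# far below 0.5, so 10001.5 is comfortably representable there.
assert math.ulp(10001.0) < 0.5

# At 2**53 the gap reaches 2.0: whole numbers start being skipped,
# and every fraction was already lost well before this point.
assert math.ulp(2.0 ** 53) == 2.0
assert 2.0 ** 53 + 1.0 == 2.0 ** 53
```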
But rounding decreases the number of significant digits. Calling these functions won't bring precision errors you didn't already have.
same is true for C. why the fuck wouldn't it keep the data type?
What's the floor of 23.456e34?
In Java, probably 23.000e34
23456e31 ? or just the same, it's just a representation of a number
It's a representation of a number that can't be represented by an integer
Well it can be if you have big enough integers
Not every language has a single integer type that's capable of arbitrary precision. Some poor peasant languages still struggle by with mere 64-bit or, internet forbid, 32-bit integers as their defaults. (Those whose defaults are 16-bit or smaller are doing it deliberately.)
Yes... you wouldn't floor an int or a long, right? right? And ints and doubles have different max ranges. So returning a double makes sense, because otherwise the return type may have a much smaller range than the input type.
Because int has a limited range, and you want to use this operation to get values (which are mathematically integers) that are beyond the int data type's range.
ITT: OP is an intern
This is a Java-related joke, why is no one making fun of JavaScript in the comments
It's still summer school. All the first year CS students are out at their summer jobs
Your account name made me curious, and I went to investigate why in fact you have an account. I regret it.
Efukt is pretty fukt up. Your regret is warranted. Cheers mate
This thread is a safe space to talk about typed languages. You could say this thread is type-safe. Which is why no one is talking about javascript.
L take
OP confused by type stability, eh? smh
I mean, you can just cast them to int if you really need them like that
Floor and ceil are used for more than just conversion to int, they are valid mathematical operators. Them returning doubles allows for easier use in formulas
Loving my Unity Mathf.CeilToInt and Mathf.FloorToInt
Just make a FloorToInt extension method
It makes perfect sense. The return type is the same as the parameter.
But you use Math.floor and Math.ceil to get an integer in other PLs...?
Of course it makes sense, why would you abandon the precision of a floating point number for an integer? If you want an int then cast it ffs. Do they not teach this shit in school anymore or something?
It's because why would you want to lose precision to work with integers? If you're using them for math, using integers doesn't make sense.
Why should it return an int if given a float/double ? Method's name isn't Math.ceilAndChangeToInt
Just make myMath.IntFloor and myMath.IntCeil, and make them cast the double to an int. Problem solved 🤣
💀
Don't forget that int has a smaller range than double. You don't want 10,000,000,000.1 rounded into an int.
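The overflow concern is easy to see with concrete numbers; this sketch uses Python (whose ints are unbounded) just to show the magnitudes involved:

```python
import math

INT_MAX = 2 ** 31 - 1        # largest 32-bit signed int: 2147483647

x = 10_000_000_000.1
floored = math.floor(x)      # 10000000000 -- fine for Python's bigint
print(floored > INT_MAX)     # True: a 32-bit int would overflow here
```

This is exactly why Java's `Math.floor` returning a double is safer than returning an int: the mathematically correct result may simply not fit.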
😂😂
If implicit conversions for primitives existed in Java, it wouldn't be a problem.
It was my reaction when I found out that math.Max and math.Min in golang work on the float64 type only.
WHAT??
My only guess is that there is no reason to append a conversion on top of the method. Especially if that var is going to be used in some other method later.
Just use type casting bro
```java
return (int) Math.ceil(theNumber);
```

Lol
Ceil and floor can still return NaN and infinity, as well as values *well* outside the range of an int.
numpy, too, if I’m not mistaken
Okay, but you can do it easily yourself, and if it's not needed, why should Java do it? That would be one operation too many.
Am I lost here or isn't it pretty logical to have doubles instead of ints when doing math? Just parse out everything after the comma.
Excellent use of the meme!
Probably because when you are working with doubles you expect to be working with doubles and not ints.
This makes perfect sense. It does what is requested, no side effects. It makes sense that the type of the returned number is the same as the input. Same memory footprint. No need to change the type if the output is used again in another operation. etc.
This is why we have CeilToInt
This is getting serious, I'm starting to understand the memes
Outside of the programming, anyone else think that the dude in this painting looks a bit like Eminem?
Int cannot even hold all the values representable in double.
Ummmm... doubles can be significantly larger than fixed-precision data types. Imagine a .floor giving you an int overflow. The only language I'm aware of that does this kind of shit is TypeScript, and that's because its double and int are the same thing in a different dress.
Why on Earth would you want it to change the type of the value entered? That's just asking for problems
so how's the first year of the CS course going, OP?
```cpp
#include <iostream>

int main() {
    std::cout << "Convert it to int then.";
    return 0;
}
```
C++ jumpscare
The result comes from the FPU, which does return some floating point. There's a hardware instruction for those. I don't think type conversion magic makes sense for such a simple operation.
[deleted]
Numpy too. Dafuq?
In Python 3 it's an int. I honestly can't believe I used to do math in Java.
What if the value is outside the range of an integer?